2026-01-10 13:43:17.893050 | Job console starting
2026-01-10 13:43:17.911895 | Updating git repos
2026-01-10 13:43:18.018545 | Cloning repos into workspace
2026-01-10 13:43:18.510689 | Restoring repo states
2026-01-10 13:43:18.560543 | Merging changes
2026-01-10 13:43:19.238342 | Checking out repos
2026-01-10 13:43:19.496859 | Preparing playbooks
2026-01-10 13:43:20.635853 | Running Ansible setup
2026-01-10 13:43:27.055480 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2026-01-10 13:43:28.669370 |
2026-01-10 13:43:28.669537 | PLAY [Base pre]
2026-01-10 13:43:28.687678 |
2026-01-10 13:43:28.687846 | TASK [Setup log path fact]
2026-01-10 13:43:28.709565 | orchestrator | ok
2026-01-10 13:43:28.732437 |
2026-01-10 13:43:28.732618 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-01-10 13:43:28.837876 | orchestrator | ok
2026-01-10 13:43:28.868361 |
2026-01-10 13:43:28.868504 | TASK [emit-job-header : Print job information]
2026-01-10 13:43:28.964309 | # Job Information
2026-01-10 13:43:28.964507 | Ansible Version: 2.16.14
2026-01-10 13:43:28.964543 | Job: testbed-deploy-stable-in-a-nutshell-ubuntu-24.04
2026-01-10 13:43:28.964579 | Pipeline: label
2026-01-10 13:43:28.964603 | Executor: 521e9411259a
2026-01-10 13:43:28.964624 | Triggered by: https://github.com/osism/testbed/pull/2818
2026-01-10 13:43:28.964645 | Event ID: 4a28f280-ee2a-11f0-88a7-9820c87091e7
2026-01-10 13:43:28.978319 |
2026-01-10 13:43:28.978479 | LOOP [emit-job-header : Print node information]
2026-01-10 13:43:29.237164 | orchestrator | ok:
2026-01-10 13:43:29.237421 | orchestrator | # Node Information
2026-01-10 13:43:29.237462 | orchestrator | Inventory Hostname: orchestrator
2026-01-10 13:43:29.237487 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2026-01-10 13:43:29.237510 | orchestrator | Username: zuul-testbed03
2026-01-10 13:43:29.237533 | orchestrator | Distro: Debian 12.12
2026-01-10 13:43:29.237557 | orchestrator | Provider: static-testbed
2026-01-10 13:43:29.237579 | orchestrator | Region:
2026-01-10 13:43:29.237600 | orchestrator | Label: testbed-orchestrator
2026-01-10 13:43:29.237620 | orchestrator | Product Name: OpenStack Nova
2026-01-10 13:43:29.237639 | orchestrator | Interface IP: 81.163.193.140
2026-01-10 13:43:29.267729 |
2026-01-10 13:43:29.267878 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2026-01-10 13:43:30.020369 | orchestrator -> localhost | changed
2026-01-10 13:43:30.029458 |
2026-01-10 13:43:30.029602 | TASK [log-inventory : Copy ansible inventory to logs dir]
2026-01-10 13:43:31.668628 | orchestrator -> localhost | changed
2026-01-10 13:43:31.690481 |
2026-01-10 13:43:31.690655 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2026-01-10 13:43:32.167183 | orchestrator -> localhost | ok
2026-01-10 13:43:32.175322 |
2026-01-10 13:43:32.175468 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2026-01-10 13:43:32.232383 | orchestrator | ok
2026-01-10 13:43:32.254076 | orchestrator | included: /var/lib/zuul/builds/e5b662a5167846cbb307ba316b919d7d/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2026-01-10 13:43:32.262820 |
2026-01-10 13:43:32.263049 | TASK [add-build-sshkey : Create Temp SSH key]
2026-01-10 13:43:34.546796 | orchestrator -> localhost | Generating public/private rsa key pair.
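The "Create Temp SSH key" task above comes down to an `ssh-keygen` invocation. A minimal sketch of that step, assuming the key type, size, and comment visible in the log output (the workspace path and build UUID are illustrative values taken from this log, not the role's actual variables):

```shell
# Sketch of the per-build key generation: an RSA 3072 keypair with no
# passphrase and the "zuul-build-sshkey" comment seen in the fingerprint.
WORK_DIR="$(mktemp -d)"
BUILD_UUID="e5b662a5167846cbb307ba316b919d7d"   # illustrative UUID from this log
ssh-keygen -q -t rsa -b 3072 -N "" -C "zuul-build-sshkey" \
  -f "${WORK_DIR}/${BUILD_UUID}_id_rsa"
# The subsequent tasks load the private key into the executor's agent
# and install the public key in authorized_keys on every node.
```

The later "Remove master key from local agent" / "Add back temp key" tasks then ensure only this build-scoped key can reach the nodes for the rest of the job.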
2026-01-10 13:43:34.547120 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/e5b662a5167846cbb307ba316b919d7d/work/e5b662a5167846cbb307ba316b919d7d_id_rsa
2026-01-10 13:43:34.547163 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/e5b662a5167846cbb307ba316b919d7d/work/e5b662a5167846cbb307ba316b919d7d_id_rsa.pub
2026-01-10 13:43:34.547190 | orchestrator -> localhost | The key fingerprint is:
2026-01-10 13:43:34.547215 | orchestrator -> localhost | SHA256:Zv6hRkHA9YVkL1/IDj8md8R9iLEUeYkhvQhORwKyuzg zuul-build-sshkey
2026-01-10 13:43:34.547237 | orchestrator -> localhost | The key's randomart image is:
2026-01-10 13:43:34.547273 | orchestrator -> localhost | +---[RSA 3072]----+
2026-01-10 13:43:34.547295 | orchestrator -> localhost | | ..o+oo=o*= . |
2026-01-10 13:43:34.547317 | orchestrator -> localhost | | o. ++o*+*oo |
2026-01-10 13:43:34.547338 | orchestrator -> localhost | | . + o+.*o= o|
2026-01-10 13:43:34.547358 | orchestrator -> localhost | | . o .*.o .|
2026-01-10 13:43:34.547378 | orchestrator -> localhost | | . S.. B . |
2026-01-10 13:43:34.547405 | orchestrator -> localhost | | . . +. + o |
2026-01-10 13:43:34.547425 | orchestrator -> localhost | | E . .. . |
2026-01-10 13:43:34.547445 | orchestrator -> localhost | | . .o . |
2026-01-10 13:43:34.547465 | orchestrator -> localhost | | .. . |
2026-01-10 13:43:34.547486 | orchestrator -> localhost | +----[SHA256]-----+
2026-01-10 13:43:34.547542 | orchestrator -> localhost | ok: Runtime: 0:00:01.406541
2026-01-10 13:43:34.555458 |
2026-01-10 13:43:34.555600 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2026-01-10 13:43:34.614200 | orchestrator | ok
2026-01-10 13:43:34.625653 | orchestrator | included: /var/lib/zuul/builds/e5b662a5167846cbb307ba316b919d7d/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2026-01-10 13:43:34.635572 |
2026-01-10 13:43:34.635712 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2026-01-10 13:43:34.660224 | orchestrator | skipping: Conditional result was False
2026-01-10 13:43:34.669036 |
2026-01-10 13:43:34.669176 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2026-01-10 13:43:35.309197 | orchestrator | changed
2026-01-10 13:43:35.317606 |
2026-01-10 13:43:35.317745 | TASK [add-build-sshkey : Make sure user has a .ssh]
2026-01-10 13:43:35.608974 | orchestrator | ok
2026-01-10 13:43:35.615734 |
2026-01-10 13:43:35.615869 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2026-01-10 13:43:36.065320 | orchestrator | ok
2026-01-10 13:43:36.072082 |
2026-01-10 13:43:36.072218 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2026-01-10 13:43:36.551676 | orchestrator | ok
2026-01-10 13:43:36.562124 |
2026-01-10 13:43:36.562271 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2026-01-10 13:43:36.639689 | orchestrator | skipping: Conditional result was False
2026-01-10 13:43:36.647126 |
2026-01-10 13:43:36.647256 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2026-01-10 13:43:37.468718 | orchestrator -> localhost | changed
2026-01-10 13:43:37.487346 |
2026-01-10 13:43:37.487496 | TASK [add-build-sshkey : Add back temp key]
2026-01-10 13:43:37.889470 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/e5b662a5167846cbb307ba316b919d7d/work/e5b662a5167846cbb307ba316b919d7d_id_rsa (zuul-build-sshkey)
2026-01-10 13:43:37.889725 | orchestrator -> localhost | ok: Runtime: 0:00:00.026641
2026-01-10 13:43:37.900593 |
2026-01-10 13:43:37.900728 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2026-01-10 13:43:38.456101 | orchestrator | ok
2026-01-10 13:43:38.464611 |
2026-01-10 13:43:38.464765 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2026-01-10 13:43:38.489677 | orchestrator | skipping: Conditional result was False
2026-01-10 13:43:38.557855 |
2026-01-10 13:43:38.558024 | TASK [start-zuul-console : Start zuul_console daemon.]
2026-01-10 13:43:39.137138 | orchestrator | ok
2026-01-10 13:43:39.163059 |
2026-01-10 13:43:39.163223 | TASK [validate-host : Define zuul_info_dir fact]
2026-01-10 13:43:39.245595 | orchestrator | ok
2026-01-10 13:43:39.268073 |
2026-01-10 13:43:39.268259 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2026-01-10 13:43:39.757368 | orchestrator -> localhost | ok
2026-01-10 13:43:39.769088 |
2026-01-10 13:43:39.769304 | TASK [validate-host : Collect information about the host]
2026-01-10 13:43:41.196119 | orchestrator | ok
2026-01-10 13:43:41.216794 |
2026-01-10 13:43:41.216944 | TASK [validate-host : Sanitize hostname]
2026-01-10 13:43:41.305002 | orchestrator | ok
2026-01-10 13:43:41.311447 |
2026-01-10 13:43:41.311587 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2026-01-10 13:43:42.250357 | orchestrator -> localhost | changed
2026-01-10 13:43:42.257307 |
2026-01-10 13:43:42.257430 | TASK [validate-host : Collect information about zuul worker]
2026-01-10 13:43:42.744710 | orchestrator | ok
2026-01-10 13:43:42.750759 |
2026-01-10 13:43:42.750925 | TASK [validate-host : Write out all zuul information for each host]
2026-01-10 13:43:43.556153 | orchestrator -> localhost | changed
2026-01-10 13:43:43.572703 |
2026-01-10 13:43:43.572851 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2026-01-10 13:43:43.869377 | orchestrator | ok
2026-01-10 13:43:43.876803 |
2026-01-10 13:43:43.876943 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2026-01-10 13:44:41.603567 | orchestrator | changed:
2026-01-10 13:44:41.603883 | orchestrator | .d..t...... src/
2026-01-10 13:44:41.603939 | orchestrator | .d..t...... src/github.com/
2026-01-10 13:44:41.604006 | orchestrator | .d..t...... src/github.com/osism/
2026-01-10 13:44:41.604044 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2026-01-10 13:44:41.604078 | orchestrator | RedHat.yml
2026-01-10 13:44:41.625125 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2026-01-10 13:44:41.625146 | orchestrator | RedHat.yml
2026-01-10 13:44:41.625208 | orchestrator | = 1.53.0"...
2026-01-10 13:44:51.506672 | orchestrator | - Finding hashicorp/local versions matching ">= 2.2.0"...
2026-01-10 13:44:51.525021 | orchestrator | - Finding latest version of hashicorp/null...
2026-01-10 13:44:51.696980 | orchestrator | - Installing terraform-provider-openstack/openstack v3.4.0...
2026-01-10 13:44:52.531449 | orchestrator | - Installed terraform-provider-openstack/openstack v3.4.0 (signed, key ID 4F80527A391BEFD2)
2026-01-10 13:44:52.765691 | orchestrator | - Installing hashicorp/local v2.6.1...
2026-01-10 13:44:53.271323 | orchestrator | - Installed hashicorp/local v2.6.1 (signed, key ID 0C0AF313E5FD9F80)
2026-01-10 13:44:53.518429 | orchestrator | - Installing hashicorp/null v3.2.4...
2026-01-10 13:44:54.391556 | orchestrator | - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2026-01-10 13:44:54.391734 | orchestrator |
2026-01-10 13:44:54.391748 | orchestrator | Providers are signed by their developers.
2026-01-10 13:44:54.391760 | orchestrator | If you'd like to know more about provider signing, you can read about it here:
2026-01-10 13:44:54.391770 | orchestrator | https://opentofu.org/docs/cli/plugins/signing/
2026-01-10 13:44:54.391794 | orchestrator |
2026-01-10 13:44:54.391805 | orchestrator | OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2026-01-10 13:44:54.391815 | orchestrator | selections it made above. Include this file in your version control repository
2026-01-10 13:44:54.391832 | orchestrator | so that OpenTofu can guarantee to make the same selections by default when
2026-01-10 13:44:54.391842 | orchestrator | you run "tofu init" in the future.
2026-01-10 13:44:54.391945 | orchestrator |
2026-01-10 13:44:54.391961 | orchestrator | OpenTofu has been successfully initialized!
2026-01-10 13:44:54.391989 | orchestrator |
2026-01-10 13:44:54.392007 | orchestrator | You may now begin working with OpenTofu. Try running "tofu plan" to see
2026-01-10 13:44:54.392016 | orchestrator | any changes that are required for your infrastructure. All OpenTofu commands
2026-01-10 13:44:54.392038 | orchestrator | should now work.
2026-01-10 13:44:54.392047 | orchestrator |
2026-01-10 13:44:54.392057 | orchestrator | If you ever set or change modules or backend configuration for OpenTofu,
2026-01-10 13:44:54.392066 | orchestrator | rerun this command to reinitialize your working directory. If you forget, other
2026-01-10 13:44:54.392076 | orchestrator | commands will detect it and remind you to do so if necessary.
2026-01-10 13:44:54.571254 | orchestrator | Created and switched to workspace "ci"!
2026-01-10 13:44:54.571505 | orchestrator |
2026-01-10 13:44:54.571527 | orchestrator | You're now on a new, empty workspace. Workspaces isolate their state,
2026-01-10 13:44:54.571540 | orchestrator | so if you run "tofu plan" OpenTofu will not see any existing state
2026-01-10 13:44:54.571552 | orchestrator | for this configuration.
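The versions resolved during `tofu init` above are consistent with a `required_providers` block roughly like the following sketch. The openstack constraint is an assumption (only the fragment `= 1.53.0"` survives in the truncated log line); `">= 2.2.0"` for hashicorp/local appears verbatim, and hashicorp/null is resolved without a constraint:

```hcl
# Sketch of provider requirements matching the init output above.
# Constraints marked "assumed" are not verbatim in the log.
terraform {
  required_providers {
    openstack = {
      source  = "terraform-provider-openstack/openstack"
      version = ">= 1.53.0" # assumed from the truncated fragment; v3.4.0 selected
    }
    local = {
      source  = "hashicorp/local"
      version = ">= 2.2.0" # verbatim in the log; v2.6.1 selected
    }
    null = {
      source = "hashicorp/null" # no constraint ("latest"); v3.2.4 selected
    }
  }
}
```

The exact versions chosen here are what `.terraform.lock.hcl` pins, which is why the init output recommends committing that file.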
2026-01-10 13:44:54.751828 | orchestrator | ci.auto.tfvars
2026-01-10 13:44:54.751954 | orchestrator | default_custom.tf
2026-01-10 13:44:56.409550 | orchestrator | data.openstack_networking_network_v2.public: Reading...
2026-01-10 13:44:56.929929 | orchestrator | data.openstack_networking_network_v2.public: Read complete after 1s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2026-01-10 13:44:57.180883 | orchestrator |
2026-01-10 13:44:57.180962 | orchestrator | OpenTofu used the selected providers to generate the following execution
2026-01-10 13:44:57.180971 | orchestrator | plan. Resource actions are indicated with the following symbols:
2026-01-10 13:44:57.181000 | orchestrator | + create
2026-01-10 13:44:57.181021 | orchestrator | <= read (data resources)
2026-01-10 13:44:57.181037 | orchestrator |
2026-01-10 13:44:57.181042 | orchestrator | OpenTofu will perform the following actions:
2026-01-10 13:44:57.181175 | orchestrator |
2026-01-10 13:44:57.181192 | orchestrator | # data.openstack_images_image_v2.image will be read during apply
2026-01-10 13:44:57.181197 | orchestrator | # (config refers to values not yet known)
2026-01-10 13:44:57.181202 | orchestrator | <= data "openstack_images_image_v2" "image" {
2026-01-10 13:44:57.181207 | orchestrator | + checksum = (known after apply)
2026-01-10 13:44:57.181212 | orchestrator | + created_at = (known after apply)
2026-01-10 13:44:57.181217 | orchestrator | + file = (known after apply)
2026-01-10 13:44:57.181221 | orchestrator | + id = (known after apply)
2026-01-10 13:44:57.181244 | orchestrator | + metadata = (known after apply)
2026-01-10 13:44:57.181249 | orchestrator | + min_disk_gb = (known after apply)
2026-01-10 13:44:57.181254 | orchestrator | + min_ram_mb = (known after apply)
2026-01-10 13:44:57.181259 | orchestrator | + most_recent = true
2026-01-10 13:44:57.181264 | orchestrator | + name = (known after apply)
2026-01-10 13:44:57.181268 | orchestrator | + protected = (known after apply)
2026-01-10 13:44:57.181273 | orchestrator | + region = (known after apply)
2026-01-10 13:44:57.181280 | orchestrator | + schema = (known after apply)
2026-01-10 13:44:57.181285 | orchestrator | + size_bytes = (known after apply)
2026-01-10 13:44:57.181313 | orchestrator | + tags = (known after apply)
2026-01-10 13:44:57.181318 | orchestrator | + updated_at = (known after apply)
2026-01-10 13:44:57.181322 | orchestrator | }
2026-01-10 13:44:57.181424 | orchestrator |
2026-01-10 13:44:57.181438 | orchestrator | # data.openstack_images_image_v2.image_node will be read during apply
2026-01-10 13:44:57.181443 | orchestrator | # (config refers to values not yet known)
2026-01-10 13:44:57.181448 | orchestrator | <= data "openstack_images_image_v2" "image_node" {
2026-01-10 13:44:57.181453 | orchestrator | + checksum = (known after apply)
2026-01-10 13:44:57.181457 | orchestrator | + created_at = (known after apply)
2026-01-10 13:44:57.181461 | orchestrator | + file = (known after apply)
2026-01-10 13:44:57.181466 | orchestrator | + id = (known after apply)
2026-01-10 13:44:57.181470 | orchestrator | + metadata = (known after apply)
2026-01-10 13:44:57.181474 | orchestrator | + min_disk_gb = (known after apply)
2026-01-10 13:44:57.181479 | orchestrator | + min_ram_mb = (known after apply)
2026-01-10 13:44:57.181483 | orchestrator | + most_recent = true
2026-01-10 13:44:57.181488 | orchestrator | + name = (known after apply)
2026-01-10 13:44:57.181492 | orchestrator | + protected = (known after apply)
2026-01-10 13:44:57.181497 | orchestrator | + region = (known after apply)
2026-01-10 13:44:57.181501 | orchestrator | + schema = (known after apply)
2026-01-10 13:44:57.181505 | orchestrator | + size_bytes = (known after apply)
2026-01-10 13:44:57.181510 | orchestrator | + tags = (known after apply)
2026-01-10 13:44:57.181514 | orchestrator | + updated_at = (known after apply)
2026-01-10 13:44:57.181519 | orchestrator | }
2026-01-10 13:44:57.181604 | orchestrator |
2026-01-10 13:44:57.181618 | orchestrator | # local_file.MANAGER_ADDRESS will be created
2026-01-10 13:44:57.181623 | orchestrator | + resource "local_file" "MANAGER_ADDRESS" {
2026-01-10 13:44:57.181628 | orchestrator | + content = (known after apply)
2026-01-10 13:44:57.181633 | orchestrator | + content_base64sha256 = (known after apply)
2026-01-10 13:44:57.181637 | orchestrator | + content_base64sha512 = (known after apply)
2026-01-10 13:44:57.181642 | orchestrator | + content_md5 = (known after apply)
2026-01-10 13:44:57.181646 | orchestrator | + content_sha1 = (known after apply)
2026-01-10 13:44:57.181650 | orchestrator | + content_sha256 = (known after apply)
2026-01-10 13:44:57.181655 | orchestrator | + content_sha512 = (known after apply)
2026-01-10 13:44:57.181659 | orchestrator | + directory_permission = "0777"
2026-01-10 13:44:57.181663 | orchestrator | + file_permission = "0644"
2026-01-10 13:44:57.181668 | orchestrator | + filename = ".MANAGER_ADDRESS.ci"
2026-01-10 13:44:57.181672 | orchestrator | + id = (known after apply)
2026-01-10 13:44:57.181676 | orchestrator | }
2026-01-10 13:44:57.181755 | orchestrator |
2026-01-10 13:44:57.181768 | orchestrator | # local_file.id_rsa_pub will be created
2026-01-10 13:44:57.181773 | orchestrator | + resource "local_file" "id_rsa_pub" {
2026-01-10 13:44:57.181777 | orchestrator | + content = (known after apply)
2026-01-10 13:44:57.181782 | orchestrator | + content_base64sha256 = (known after apply)
2026-01-10 13:44:57.181786 | orchestrator | + content_base64sha512 = (known after apply)
2026-01-10 13:44:57.181790 | orchestrator | + content_md5 = (known after apply)
2026-01-10 13:44:57.181795 | orchestrator | + content_sha1 = (known after apply)
2026-01-10 13:44:57.181799 | orchestrator | + content_sha256 = (known after apply)
2026-01-10 13:44:57.181803 | orchestrator | + content_sha512 = (known after apply)
2026-01-10 13:44:57.181807 | orchestrator | + directory_permission = "0777"
2026-01-10 13:44:57.181812 | orchestrator | + file_permission = "0644"
2026-01-10 13:44:57.181821 | orchestrator | + filename = ".id_rsa.ci.pub"
2026-01-10 13:44:57.181825 | orchestrator | + id = (known after apply)
2026-01-10 13:44:57.181830 | orchestrator | }
2026-01-10 13:44:57.181905 | orchestrator |
2026-01-10 13:44:57.181924 | orchestrator | # local_file.inventory will be created
2026-01-10 13:44:57.181929 | orchestrator | + resource "local_file" "inventory" {
2026-01-10 13:44:57.181934 | orchestrator | + content = (known after apply)
2026-01-10 13:44:57.181938 | orchestrator | + content_base64sha256 = (known after apply)
2026-01-10 13:44:57.181942 | orchestrator | + content_base64sha512 = (known after apply)
2026-01-10 13:44:57.181947 | orchestrator | + content_md5 = (known after apply)
2026-01-10 13:44:57.181951 | orchestrator | + content_sha1 = (known after apply)
2026-01-10 13:44:57.181956 | orchestrator | + content_sha256 = (known after apply)
2026-01-10 13:44:57.181960 | orchestrator | + content_sha512 = (known after apply)
2026-01-10 13:44:57.181965 | orchestrator | + directory_permission = "0777"
2026-01-10 13:44:57.181972 | orchestrator | + file_permission = "0644"
2026-01-10 13:44:57.181980 | orchestrator | + filename = "inventory.ci"
2026-01-10 13:44:57.181986 | orchestrator | + id = (known after apply)
2026-01-10 13:44:57.181990 | orchestrator | }
2026-01-10 13:44:57.182101 | orchestrator |
2026-01-10 13:44:57.182116 | orchestrator | # local_sensitive_file.id_rsa will be created
2026-01-10 13:44:57.182122 | orchestrator | + resource "local_sensitive_file" "id_rsa" {
2026-01-10 13:44:57.182126 | orchestrator | + content = (sensitive value)
2026-01-10 13:44:57.182131 | orchestrator | + content_base64sha256 = (known after apply)
2026-01-10 13:44:57.182135 | orchestrator | + content_base64sha512 = (known after apply)
2026-01-10 13:44:57.182139 | orchestrator | + content_md5 = (known after apply)
2026-01-10 13:44:57.182144 | orchestrator | + content_sha1 = (known after apply)
2026-01-10 13:44:57.182148 | orchestrator | + content_sha256 = (known after apply)
2026-01-10 13:44:57.182153 | orchestrator | + content_sha512 = (known after apply)
2026-01-10 13:44:57.182157 | orchestrator | + directory_permission = "0700"
2026-01-10 13:44:57.182162 | orchestrator | + file_permission = "0600"
2026-01-10 13:44:57.182166 | orchestrator | + filename = ".id_rsa.ci"
2026-01-10 13:44:57.182170 | orchestrator | + id = (known after apply)
2026-01-10 13:44:57.182175 | orchestrator | }
2026-01-10 13:44:57.182199 | orchestrator |
2026-01-10 13:44:57.182211 | orchestrator | # null_resource.node_semaphore will be created
2026-01-10 13:44:57.182216 | orchestrator | + resource "null_resource" "node_semaphore" {
2026-01-10 13:44:57.182221 | orchestrator | + id = (known after apply)
2026-01-10 13:44:57.182225 | orchestrator | }
2026-01-10 13:44:57.182359 | orchestrator |
2026-01-10 13:44:57.182380 | orchestrator | # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2026-01-10 13:44:57.182386 | orchestrator | + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2026-01-10 13:44:57.182390 | orchestrator | + attachment = (known after apply)
2026-01-10 13:44:57.182394 | orchestrator | + availability_zone = "nova"
2026-01-10 13:44:57.182399 | orchestrator | + id = (known after apply)
2026-01-10 13:44:57.182403 | orchestrator | + image_id = (known after apply)
2026-01-10 13:44:57.182408 | orchestrator | + metadata = (known after apply)
2026-01-10 13:44:57.182412 | orchestrator | + name = "testbed-volume-manager-base"
2026-01-10 13:44:57.182416 | orchestrator | + region = (known after apply)
2026-01-10 13:44:57.182421 | orchestrator | + size = 80
2026-01-10 13:44:57.182425 | orchestrator | + volume_retype_policy = "never"
2026-01-10 13:44:57.182430 | orchestrator | + volume_type = "ssd"
2026-01-10 13:44:57.182434 | orchestrator | }
2026-01-10 13:44:57.182508 | orchestrator |
2026-01-10 13:44:57.182521 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2026-01-10 13:44:57.182526 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-10 13:44:57.182531 | orchestrator | + attachment = (known after apply)
2026-01-10 13:44:57.182535 | orchestrator | + availability_zone = "nova"
2026-01-10 13:44:57.182540 | orchestrator | + id = (known after apply)
2026-01-10 13:44:57.182550 | orchestrator | + image_id = (known after apply)
2026-01-10 13:44:57.182555 | orchestrator | + metadata = (known after apply)
2026-01-10 13:44:57.182559 | orchestrator | + name = "testbed-volume-0-node-base"
2026-01-10 13:44:57.182564 | orchestrator | + region = (known after apply)
2026-01-10 13:44:57.182568 | orchestrator | + size = 80
2026-01-10 13:44:57.182572 | orchestrator | + volume_retype_policy = "never"
2026-01-10 13:44:57.182577 | orchestrator | + volume_type = "ssd"
2026-01-10 13:44:57.182581 | orchestrator | }
2026-01-10 13:44:57.182651 | orchestrator |
2026-01-10 13:44:57.182664 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2026-01-10 13:44:57.182669 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-10 13:44:57.182673 | orchestrator | + attachment = (known after apply)
2026-01-10 13:44:57.182678 | orchestrator | + availability_zone = "nova"
2026-01-10 13:44:57.182682 | orchestrator | + id = (known after apply)
2026-01-10 13:44:57.182686 | orchestrator | + image_id = (known after apply)
2026-01-10 13:44:57.182691 | orchestrator | + metadata = (known after apply)
2026-01-10 13:44:57.182695 | orchestrator | + name = "testbed-volume-1-node-base"
2026-01-10 13:44:57.182699 | orchestrator | + region = (known after apply)
2026-01-10 13:44:57.182704 | orchestrator | + size = 80
2026-01-10 13:44:57.182708 | orchestrator | + volume_retype_policy = "never"
2026-01-10 13:44:57.182712 | orchestrator | + volume_type = "ssd"
2026-01-10 13:44:57.182717 | orchestrator | }
2026-01-10 13:44:57.182785 | orchestrator |
2026-01-10 13:44:57.182798 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2026-01-10 13:44:57.182803 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-10 13:44:57.182807 | orchestrator | + attachment = (known after apply)
2026-01-10 13:44:57.182811 | orchestrator | + availability_zone = "nova"
2026-01-10 13:44:57.182816 | orchestrator | + id = (known after apply)
2026-01-10 13:44:57.182820 | orchestrator | + image_id = (known after apply)
2026-01-10 13:44:57.182824 | orchestrator | + metadata = (known after apply)
2026-01-10 13:44:57.182829 | orchestrator | + name = "testbed-volume-2-node-base"
2026-01-10 13:44:57.182833 | orchestrator | + region = (known after apply)
2026-01-10 13:44:57.182837 | orchestrator | + size = 80
2026-01-10 13:44:57.182842 | orchestrator | + volume_retype_policy = "never"
2026-01-10 13:44:57.182846 | orchestrator | + volume_type = "ssd"
2026-01-10 13:44:57.182851 | orchestrator | }
2026-01-10 13:44:57.182919 | orchestrator |
2026-01-10 13:44:57.182932 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2026-01-10 13:44:57.182937 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-10 13:44:57.182942 | orchestrator | + attachment = (known after apply)
2026-01-10 13:44:57.182946 | orchestrator | + availability_zone = "nova"
2026-01-10 13:44:57.182950 | orchestrator | + id = (known after apply)
2026-01-10 13:44:57.182955 | orchestrator | + image_id = (known after apply)
2026-01-10 13:44:57.182959 | orchestrator | + metadata = (known after apply)
2026-01-10 13:44:57.182968 | orchestrator | + name = "testbed-volume-3-node-base"
2026-01-10 13:44:57.182972 | orchestrator | + region = (known after apply)
2026-01-10 13:44:57.182977 | orchestrator | + size = 80
2026-01-10 13:44:57.182981 | orchestrator | + volume_retype_policy = "never"
2026-01-10 13:44:57.182985 | orchestrator | + volume_type = "ssd"
2026-01-10 13:44:57.182990 | orchestrator | }
2026-01-10 13:44:57.183057 | orchestrator |
2026-01-10 13:44:57.183071 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2026-01-10 13:44:57.183076 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-10 13:44:57.183080 | orchestrator | + attachment = (known after apply)
2026-01-10 13:44:57.183084 | orchestrator | + availability_zone = "nova"
2026-01-10 13:44:57.183089 | orchestrator | + id = (known after apply)
2026-01-10 13:44:57.183098 | orchestrator | + image_id = (known after apply)
2026-01-10 13:44:57.183102 | orchestrator | + metadata = (known after apply)
2026-01-10 13:44:57.183106 | orchestrator | + name = "testbed-volume-4-node-base"
2026-01-10 13:44:57.183111 | orchestrator | + region = (known after apply)
2026-01-10 13:44:57.183115 | orchestrator | + size = 80
2026-01-10 13:44:57.183119 | orchestrator | + volume_retype_policy = "never"
2026-01-10 13:44:57.183124 | orchestrator | + volume_type = "ssd"
2026-01-10 13:44:57.183128 | orchestrator | }
2026-01-10 13:44:57.183198 | orchestrator |
2026-01-10 13:44:57.183211 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2026-01-10 13:44:57.183216 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-10 13:44:57.183220 | orchestrator | + attachment = (known after apply)
2026-01-10 13:44:57.183225 | orchestrator | + availability_zone = "nova"
2026-01-10 13:44:57.183229 | orchestrator | + id = (known after apply)
2026-01-10 13:44:57.183233 | orchestrator | + image_id = (known after apply)
2026-01-10 13:44:57.183238 | orchestrator | + metadata = (known after apply)
2026-01-10 13:44:57.183242 | orchestrator | + name = "testbed-volume-5-node-base"
2026-01-10 13:44:57.183246 | orchestrator | + region = (known after apply)
2026-01-10 13:44:57.183251 | orchestrator | + size = 80
2026-01-10 13:44:57.183255 | orchestrator | + volume_retype_policy = "never"
2026-01-10 13:44:57.183259 | orchestrator | + volume_type = "ssd"
2026-01-10 13:44:57.183264 | orchestrator | }
2026-01-10 13:44:57.183346 | orchestrator |
2026-01-10 13:44:57.183360 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[0] will be created
2026-01-10 13:44:57.183365 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-10 13:44:57.183370 | orchestrator | + attachment = (known after apply)
2026-01-10 13:44:57.183374 | orchestrator | + availability_zone = "nova"
2026-01-10 13:44:57.183378 | orchestrator | + id = (known after apply)
2026-01-10 13:44:57.183383 | orchestrator | + metadata = (known after apply)
2026-01-10 13:44:57.183387 | orchestrator | + name = "testbed-volume-0-node-3"
2026-01-10 13:44:57.183391 | orchestrator | + region = (known after apply)
2026-01-10 13:44:57.183396 | orchestrator | + size = 20
2026-01-10 13:44:57.183400 | orchestrator | + volume_retype_policy = "never"
2026-01-10 13:44:57.183404 | orchestrator | + volume_type = "ssd"
2026-01-10 13:44:57.183409 | orchestrator | }
2026-01-10 13:44:57.183473 | orchestrator |
2026-01-10 13:44:57.183485 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[1] will be created
2026-01-10 13:44:57.183490 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-10 13:44:57.183495 | orchestrator | + attachment = (known after apply)
2026-01-10 13:44:57.183499 | orchestrator | + availability_zone = "nova"
2026-01-10 13:44:57.183503 | orchestrator | + id = (known after apply)
2026-01-10 13:44:57.183508 | orchestrator | + metadata = (known after apply)
2026-01-10 13:44:57.183512 | orchestrator | + name = "testbed-volume-1-node-4"
2026-01-10 13:44:57.183516 | orchestrator | + region = (known after apply)
2026-01-10 13:44:57.183521 | orchestrator | + size = 20
2026-01-10 13:44:57.183525 | orchestrator | + volume_retype_policy = "never"
2026-01-10 13:44:57.183530 | orchestrator | + volume_type = "ssd"
2026-01-10 13:44:57.183534 | orchestrator | }
2026-01-10 13:44:57.183601 | orchestrator |
2026-01-10 13:44:57.183614 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[2] will be created
2026-01-10 13:44:57.183619 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-10 13:44:57.183624 | orchestrator | + attachment = (known after apply)
2026-01-10 13:44:57.183628 | orchestrator | + availability_zone = "nova"
2026-01-10 13:44:57.183632 | orchestrator | + id = (known after apply)
2026-01-10 13:44:57.183637 | orchestrator | + metadata = (known after apply)
2026-01-10 13:44:57.183641 | orchestrator | + name = "testbed-volume-2-node-5"
2026-01-10 13:44:57.183645 | orchestrator | + region = (known after apply)
2026-01-10 13:44:57.183654 | orchestrator | + size = 20
2026-01-10 13:44:57.183658 | orchestrator | + volume_retype_policy = "never"
2026-01-10 13:44:57.183663 | orchestrator | + volume_type = "ssd"
2026-01-10 13:44:57.183667 | orchestrator | }
2026-01-10 13:44:57.183733 | orchestrator |
2026-01-10 13:44:57.183746 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[3] will be created
2026-01-10 13:44:57.183751 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-10 13:44:57.183755 | orchestrator | + attachment = (known after apply)
2026-01-10 13:44:57.183760 | orchestrator | + availability_zone = "nova"
2026-01-10 13:44:57.183764 | orchestrator | + id = (known after apply)
2026-01-10 13:44:57.183769 | orchestrator | + metadata = (known after apply)
2026-01-10 13:44:57.183773 | orchestrator | + name = "testbed-volume-3-node-3"
2026-01-10 13:44:57.183777 | orchestrator | + region = (known after apply)
2026-01-10 13:44:57.183782 | orchestrator | + size = 20
2026-01-10 13:44:57.183786 | orchestrator | + volume_retype_policy = "never"
2026-01-10 13:44:57.183790 | orchestrator | + volume_type = "ssd"
2026-01-10 13:44:57.183795 | orchestrator | }
2026-01-10 13:44:57.183856 | orchestrator |
2026-01-10 13:44:57.183869 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[4] will be created
2026-01-10 13:44:57.183874 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-10 13:44:57.183878 | orchestrator | + attachment = (known after apply)
2026-01-10 13:44:57.183883 | orchestrator | + availability_zone = "nova"
2026-01-10 13:44:57.183887 | orchestrator | + id = (known after apply)
2026-01-10 13:44:57.183891 | orchestrator | + metadata = (known after apply)
2026-01-10 13:44:57.183896 | orchestrator | + name = "testbed-volume-4-node-4"
2026-01-10 13:44:57.183900 | orchestrator | + region = (known after apply)
2026-01-10 13:44:57.183908 | orchestrator | + size = 20
2026-01-10 13:44:57.183913 | orchestrator | + volume_retype_policy = "never"
2026-01-10 13:44:57.183917 | orchestrator | + volume_type = "ssd"
2026-01-10 13:44:57.183921 | orchestrator | }
2026-01-10 13:44:57.183986 | orchestrator |
2026-01-10 13:44:57.183999 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[5] will be created
2026-01-10 13:44:57.184004 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-10 13:44:57.184009 | orchestrator | + attachment = (known after apply)
2026-01-10 13:44:57.184013 | orchestrator | + availability_zone = "nova"
2026-01-10 13:44:57.184017 | orchestrator | + id = (known after apply)
2026-01-10 13:44:57.184065 | orchestrator | + metadata = (known after apply)
2026-01-10 13:44:57.184071 | orchestrator | + name = "testbed-volume-5-node-5"
2026-01-10 13:44:57.184075 | orchestrator | + region = (known after apply)
2026-01-10 13:44:57.184079 | orchestrator | + size = 20
2026-01-10 13:44:57.184083 | orchestrator | + volume_retype_policy = "never"
2026-01-10 13:44:57.184088 | orchestrator | + volume_type = "ssd"
2026-01-10 13:44:57.184092 | orchestrator | }
2026-01-10 13:44:57.184170 | orchestrator |
2026-01-10 13:44:57.184183 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[6] will be created
2026-01-10 13:44:57.184188 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-10 13:44:57.184192 | orchestrator | + attachment = (known after apply)
2026-01-10 13:44:57.184196 | orchestrator | + availability_zone = "nova"
2026-01-10 13:44:57.184201 | orchestrator | + id = (known after apply)
2026-01-10 13:44:57.184205 | orchestrator | + metadata = (known after apply)
2026-01-10 13:44:57.184209 | orchestrator | + name = "testbed-volume-6-node-3"
2026-01-10 13:44:57.184214 | orchestrator | + region = (known after apply)
2026-01-10 13:44:57.184218 | orchestrator | + size = 20
2026-01-10 13:44:57.184222 | orchestrator | + volume_retype_policy = "never"
2026-01-10 13:44:57.184227 | orchestrator | + volume_type = "ssd"
2026-01-10 13:44:57.184231 | orchestrator | }
2026-01-10 13:44:57.184312 | orchestrator |
2026-01-10 13:44:57.184325 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[7] will be created
2026-01-10 13:44:57.184330 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-10 13:44:57.184340 | orchestrator | + attachment = (known after apply)
2026-01-10 13:44:57.184344 | orchestrator | + availability_zone = "nova"
2026-01-10 13:44:57.184348 | orchestrator | + id = (known after apply)
2026-01-10 13:44:57.184353 | orchestrator | + metadata = (known after apply)
2026-01-10 13:44:57.184357 | orchestrator | + name = "testbed-volume-7-node-4"
2026-01-10 13:44:57.184361 | orchestrator | + region = (known after apply)
2026-01-10 13:44:57.184366 | orchestrator | + size = 20
2026-01-10 13:44:57.184370 | orchestrator | + volume_retype_policy = "never"
2026-01-10 13:44:57.184375 | orchestrator | + volume_type = "ssd"
2026-01-10 13:44:57.184379 | orchestrator | }
2026-01-10 13:44:57.184448 | orchestrator |
2026-01-10 13:44:57.184461 | orchestrator | #
openstack_blockstorage_volume_v3.node_volume[8] will be created 2026-01-10 13:44:57.184466 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-01-10 13:44:57.184471 | orchestrator | + attachment = (known after apply) 2026-01-10 13:44:57.184475 | orchestrator | + availability_zone = "nova" 2026-01-10 13:44:57.184479 | orchestrator | + id = (known after apply) 2026-01-10 13:44:57.184484 | orchestrator | + metadata = (known after apply) 2026-01-10 13:44:57.184488 | orchestrator | + name = "testbed-volume-8-node-5" 2026-01-10 13:44:57.184492 | orchestrator | + region = (known after apply) 2026-01-10 13:44:57.184497 | orchestrator | + size = 20 2026-01-10 13:44:57.184501 | orchestrator | + volume_retype_policy = "never" 2026-01-10 13:44:57.184505 | orchestrator | + volume_type = "ssd" 2026-01-10 13:44:57.184509 | orchestrator | } 2026-01-10 13:44:57.184730 | orchestrator | 2026-01-10 13:44:57.184744 | orchestrator | # openstack_compute_instance_v2.manager_server will be created 2026-01-10 13:44:57.184748 | orchestrator | + resource "openstack_compute_instance_v2" "manager_server" { 2026-01-10 13:44:57.184753 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-10 13:44:57.184757 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-10 13:44:57.184762 | orchestrator | + all_metadata = (known after apply) 2026-01-10 13:44:57.184766 | orchestrator | + all_tags = (known after apply) 2026-01-10 13:44:57.184770 | orchestrator | + availability_zone = "nova" 2026-01-10 13:44:57.184775 | orchestrator | + config_drive = true 2026-01-10 13:44:57.184779 | orchestrator | + created = (known after apply) 2026-01-10 13:44:57.184783 | orchestrator | + flavor_id = (known after apply) 2026-01-10 13:44:57.184788 | orchestrator | + flavor_name = "OSISM-4V-16" 2026-01-10 13:44:57.184792 | orchestrator | + force_delete = false 2026-01-10 13:44:57.184796 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-10 13:44:57.184801 | 
orchestrator | + id = (known after apply) 2026-01-10 13:44:57.184805 | orchestrator | + image_id = (known after apply) 2026-01-10 13:44:57.184810 | orchestrator | + image_name = (known after apply) 2026-01-10 13:44:57.184814 | orchestrator | + key_pair = "testbed" 2026-01-10 13:44:57.184818 | orchestrator | + name = "testbed-manager" 2026-01-10 13:44:57.184823 | orchestrator | + power_state = "active" 2026-01-10 13:44:57.184827 | orchestrator | + region = (known after apply) 2026-01-10 13:44:57.184831 | orchestrator | + security_groups = (known after apply) 2026-01-10 13:44:57.184836 | orchestrator | + stop_before_destroy = false 2026-01-10 13:44:57.184840 | orchestrator | + updated = (known after apply) 2026-01-10 13:44:57.184844 | orchestrator | + user_data = (sensitive value) 2026-01-10 13:44:57.184849 | orchestrator | 2026-01-10 13:44:57.184854 | orchestrator | + block_device { 2026-01-10 13:44:57.184858 | orchestrator | + boot_index = 0 2026-01-10 13:44:57.184863 | orchestrator | + delete_on_termination = false 2026-01-10 13:44:57.184871 | orchestrator | + destination_type = "volume" 2026-01-10 13:44:57.184875 | orchestrator | + multiattach = false 2026-01-10 13:44:57.184880 | orchestrator | + source_type = "volume" 2026-01-10 13:44:57.184884 | orchestrator | + uuid = (known after apply) 2026-01-10 13:44:57.184893 | orchestrator | } 2026-01-10 13:44:57.184898 | orchestrator | 2026-01-10 13:44:57.184902 | orchestrator | + network { 2026-01-10 13:44:57.184907 | orchestrator | + access_network = false 2026-01-10 13:44:57.184911 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-10 13:44:57.184915 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-10 13:44:57.184920 | orchestrator | + mac = (known after apply) 2026-01-10 13:44:57.184924 | orchestrator | + name = (known after apply) 2026-01-10 13:44:57.184928 | orchestrator | + port = (known after apply) 2026-01-10 13:44:57.184933 | orchestrator | + uuid = (known after apply) 2026-01-10 
13:44:57.184937 | orchestrator | } 2026-01-10 13:44:57.184942 | orchestrator | } 2026-01-10 13:44:57.185149 | orchestrator | 2026-01-10 13:44:57.185163 | orchestrator | # openstack_compute_instance_v2.node_server[0] will be created 2026-01-10 13:44:57.185168 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-01-10 13:44:57.185172 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-10 13:44:57.185176 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-10 13:44:57.185181 | orchestrator | + all_metadata = (known after apply) 2026-01-10 13:44:57.185185 | orchestrator | + all_tags = (known after apply) 2026-01-10 13:44:57.185189 | orchestrator | + availability_zone = "nova" 2026-01-10 13:44:57.185194 | orchestrator | + config_drive = true 2026-01-10 13:44:57.185198 | orchestrator | + created = (known after apply) 2026-01-10 13:44:57.185202 | orchestrator | + flavor_id = (known after apply) 2026-01-10 13:44:57.185207 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-01-10 13:44:57.185211 | orchestrator | + force_delete = false 2026-01-10 13:44:57.185216 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-10 13:44:57.185220 | orchestrator | + id = (known after apply) 2026-01-10 13:44:57.185224 | orchestrator | + image_id = (known after apply) 2026-01-10 13:44:57.185229 | orchestrator | + image_name = (known after apply) 2026-01-10 13:44:57.185233 | orchestrator | + key_pair = "testbed" 2026-01-10 13:44:57.185237 | orchestrator | + name = "testbed-node-0" 2026-01-10 13:44:57.185242 | orchestrator | + power_state = "active" 2026-01-10 13:44:57.185246 | orchestrator | + region = (known after apply) 2026-01-10 13:44:57.185250 | orchestrator | + security_groups = (known after apply) 2026-01-10 13:44:57.185255 | orchestrator | + stop_before_destroy = false 2026-01-10 13:44:57.185259 | orchestrator | + updated = (known after apply) 2026-01-10 13:44:57.185263 | orchestrator | + user_data = 
"ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-01-10 13:44:57.185268 | orchestrator | 2026-01-10 13:44:57.185272 | orchestrator | + block_device { 2026-01-10 13:44:57.185277 | orchestrator | + boot_index = 0 2026-01-10 13:44:57.185281 | orchestrator | + delete_on_termination = false 2026-01-10 13:44:57.185285 | orchestrator | + destination_type = "volume" 2026-01-10 13:44:57.185327 | orchestrator | + multiattach = false 2026-01-10 13:44:57.185331 | orchestrator | + source_type = "volume" 2026-01-10 13:44:57.185336 | orchestrator | + uuid = (known after apply) 2026-01-10 13:44:57.185340 | orchestrator | } 2026-01-10 13:44:57.185344 | orchestrator | 2026-01-10 13:44:57.185349 | orchestrator | + network { 2026-01-10 13:44:57.185353 | orchestrator | + access_network = false 2026-01-10 13:44:57.185357 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-10 13:44:57.185362 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-10 13:44:57.185366 | orchestrator | + mac = (known after apply) 2026-01-10 13:44:57.185370 | orchestrator | + name = (known after apply) 2026-01-10 13:44:57.185375 | orchestrator | + port = (known after apply) 2026-01-10 13:44:57.185379 | orchestrator | + uuid = (known after apply) 2026-01-10 13:44:57.185383 | orchestrator | } 2026-01-10 13:44:57.185387 | orchestrator | } 2026-01-10 13:44:57.185593 | orchestrator | 2026-01-10 13:44:57.185607 | orchestrator | # openstack_compute_instance_v2.node_server[1] will be created 2026-01-10 13:44:57.185612 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-01-10 13:44:57.185616 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-10 13:44:57.185625 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-10 13:44:57.185629 | orchestrator | + all_metadata = (known after apply) 2026-01-10 13:44:57.185633 | orchestrator | + all_tags = (known after apply) 2026-01-10 13:44:57.185637 | orchestrator | + availability_zone = "nova" 2026-01-10 13:44:57.185641 
| orchestrator | + config_drive = true 2026-01-10 13:44:57.185645 | orchestrator | + created = (known after apply) 2026-01-10 13:44:57.185649 | orchestrator | + flavor_id = (known after apply) 2026-01-10 13:44:57.185653 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-01-10 13:44:57.185657 | orchestrator | + force_delete = false 2026-01-10 13:44:57.185661 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-10 13:44:57.185666 | orchestrator | + id = (known after apply) 2026-01-10 13:44:57.185670 | orchestrator | + image_id = (known after apply) 2026-01-10 13:44:57.185674 | orchestrator | + image_name = (known after apply) 2026-01-10 13:44:57.185678 | orchestrator | + key_pair = "testbed" 2026-01-10 13:44:57.185682 | orchestrator | + name = "testbed-node-1" 2026-01-10 13:44:57.185686 | orchestrator | + power_state = "active" 2026-01-10 13:44:57.185690 | orchestrator | + region = (known after apply) 2026-01-10 13:44:57.185694 | orchestrator | + security_groups = (known after apply) 2026-01-10 13:44:57.185698 | orchestrator | + stop_before_destroy = false 2026-01-10 13:44:57.185702 | orchestrator | + updated = (known after apply) 2026-01-10 13:44:57.185706 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-01-10 13:44:57.185711 | orchestrator | 2026-01-10 13:44:57.185715 | orchestrator | + block_device { 2026-01-10 13:44:57.185719 | orchestrator | + boot_index = 0 2026-01-10 13:44:57.185723 | orchestrator | + delete_on_termination = false 2026-01-10 13:44:57.185727 | orchestrator | + destination_type = "volume" 2026-01-10 13:44:57.185731 | orchestrator | + multiattach = false 2026-01-10 13:44:57.185735 | orchestrator | + source_type = "volume" 2026-01-10 13:44:57.185739 | orchestrator | + uuid = (known after apply) 2026-01-10 13:44:57.185743 | orchestrator | } 2026-01-10 13:44:57.185747 | orchestrator | 2026-01-10 13:44:57.185752 | orchestrator | + network { 2026-01-10 13:44:57.185756 | orchestrator | + access_network = 
false 2026-01-10 13:44:57.185760 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-10 13:44:57.185764 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-10 13:44:57.185768 | orchestrator | + mac = (known after apply) 2026-01-10 13:44:57.185772 | orchestrator | + name = (known after apply) 2026-01-10 13:44:57.185776 | orchestrator | + port = (known after apply) 2026-01-10 13:44:57.185780 | orchestrator | + uuid = (known after apply) 2026-01-10 13:44:57.185784 | orchestrator | } 2026-01-10 13:44:57.185788 | orchestrator | } 2026-01-10 13:44:57.185998 | orchestrator | 2026-01-10 13:44:57.186036 | orchestrator | # openstack_compute_instance_v2.node_server[2] will be created 2026-01-10 13:44:57.186042 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-01-10 13:44:57.186047 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-10 13:44:57.186051 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-10 13:44:57.186056 | orchestrator | + all_metadata = (known after apply) 2026-01-10 13:44:57.186060 | orchestrator | + all_tags = (known after apply) 2026-01-10 13:44:57.186072 | orchestrator | + availability_zone = "nova" 2026-01-10 13:44:57.186076 | orchestrator | + config_drive = true 2026-01-10 13:44:57.186080 | orchestrator | + created = (known after apply) 2026-01-10 13:44:57.186085 | orchestrator | + flavor_id = (known after apply) 2026-01-10 13:44:57.186089 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-01-10 13:44:57.186093 | orchestrator | + force_delete = false 2026-01-10 13:44:57.186097 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-10 13:44:57.186101 | orchestrator | + id = (known after apply) 2026-01-10 13:44:57.186105 | orchestrator | + image_id = (known after apply) 2026-01-10 13:44:57.186113 | orchestrator | + image_name = (known after apply) 2026-01-10 13:44:57.186118 | orchestrator | + key_pair = "testbed" 2026-01-10 13:44:57.186122 | orchestrator | + name = 
"testbed-node-2" 2026-01-10 13:44:57.186126 | orchestrator | + power_state = "active" 2026-01-10 13:44:57.186130 | orchestrator | + region = (known after apply) 2026-01-10 13:44:57.186134 | orchestrator | + security_groups = (known after apply) 2026-01-10 13:44:57.186138 | orchestrator | + stop_before_destroy = false 2026-01-10 13:44:57.186142 | orchestrator | + updated = (known after apply) 2026-01-10 13:44:57.186147 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-01-10 13:44:57.186151 | orchestrator | 2026-01-10 13:44:57.186155 | orchestrator | + block_device { 2026-01-10 13:44:57.186159 | orchestrator | + boot_index = 0 2026-01-10 13:44:57.186163 | orchestrator | + delete_on_termination = false 2026-01-10 13:44:57.186167 | orchestrator | + destination_type = "volume" 2026-01-10 13:44:57.186171 | orchestrator | + multiattach = false 2026-01-10 13:44:57.186175 | orchestrator | + source_type = "volume" 2026-01-10 13:44:57.186180 | orchestrator | + uuid = (known after apply) 2026-01-10 13:44:57.186184 | orchestrator | } 2026-01-10 13:44:57.186188 | orchestrator | 2026-01-10 13:44:57.186192 | orchestrator | + network { 2026-01-10 13:44:57.186196 | orchestrator | + access_network = false 2026-01-10 13:44:57.186201 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-10 13:44:57.186205 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-10 13:44:57.186209 | orchestrator | + mac = (known after apply) 2026-01-10 13:44:57.186213 | orchestrator | + name = (known after apply) 2026-01-10 13:44:57.186217 | orchestrator | + port = (known after apply) 2026-01-10 13:44:57.186221 | orchestrator | + uuid = (known after apply) 2026-01-10 13:44:57.186225 | orchestrator | } 2026-01-10 13:44:57.186230 | orchestrator | } 2026-01-10 13:44:57.186441 | orchestrator | 2026-01-10 13:44:57.186455 | orchestrator | # openstack_compute_instance_v2.node_server[3] will be created 2026-01-10 13:44:57.186460 | orchestrator | + resource 
"openstack_compute_instance_v2" "node_server" { 2026-01-10 13:44:57.186464 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-10 13:44:57.186468 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-10 13:44:57.186472 | orchestrator | + all_metadata = (known after apply) 2026-01-10 13:44:57.186476 | orchestrator | + all_tags = (known after apply) 2026-01-10 13:44:57.186480 | orchestrator | + availability_zone = "nova" 2026-01-10 13:44:57.186485 | orchestrator | + config_drive = true 2026-01-10 13:44:57.186489 | orchestrator | + created = (known after apply) 2026-01-10 13:44:57.186493 | orchestrator | + flavor_id = (known after apply) 2026-01-10 13:44:57.186497 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-01-10 13:44:57.186501 | orchestrator | + force_delete = false 2026-01-10 13:44:57.186505 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-10 13:44:57.186510 | orchestrator | + id = (known after apply) 2026-01-10 13:44:57.186514 | orchestrator | + image_id = (known after apply) 2026-01-10 13:44:57.186518 | orchestrator | + image_name = (known after apply) 2026-01-10 13:44:57.186522 | orchestrator | + key_pair = "testbed" 2026-01-10 13:44:57.186526 | orchestrator | + name = "testbed-node-3" 2026-01-10 13:44:57.186530 | orchestrator | + power_state = "active" 2026-01-10 13:44:57.186535 | orchestrator | + region = (known after apply) 2026-01-10 13:44:57.186539 | orchestrator | + security_groups = (known after apply) 2026-01-10 13:44:57.186543 | orchestrator | + stop_before_destroy = false 2026-01-10 13:44:57.186547 | orchestrator | + updated = (known after apply) 2026-01-10 13:44:57.186551 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-01-10 13:44:57.186555 | orchestrator | 2026-01-10 13:44:57.186560 | orchestrator | + block_device { 2026-01-10 13:44:57.186567 | orchestrator | + boot_index = 0 2026-01-10 13:44:57.186571 | orchestrator | + delete_on_termination = false 2026-01-10 
13:44:57.186575 | orchestrator | + destination_type = "volume" 2026-01-10 13:44:57.186584 | orchestrator | + multiattach = false 2026-01-10 13:44:57.186588 | orchestrator | + source_type = "volume" 2026-01-10 13:44:57.186593 | orchestrator | + uuid = (known after apply) 2026-01-10 13:44:57.186597 | orchestrator | } 2026-01-10 13:44:57.186601 | orchestrator | 2026-01-10 13:44:57.186605 | orchestrator | + network { 2026-01-10 13:44:57.186609 | orchestrator | + access_network = false 2026-01-10 13:44:57.186613 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-10 13:44:57.186617 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-10 13:44:57.186621 | orchestrator | + mac = (known after apply) 2026-01-10 13:44:57.186625 | orchestrator | + name = (known after apply) 2026-01-10 13:44:57.186629 | orchestrator | + port = (known after apply) 2026-01-10 13:44:57.186633 | orchestrator | + uuid = (known after apply) 2026-01-10 13:44:57.186637 | orchestrator | } 2026-01-10 13:44:57.186642 | orchestrator | } 2026-01-10 13:44:57.186834 | orchestrator | 2026-01-10 13:44:57.186846 | orchestrator | # openstack_compute_instance_v2.node_server[4] will be created 2026-01-10 13:44:57.186851 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-01-10 13:44:57.186855 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-10 13:44:57.186860 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-10 13:44:57.186864 | orchestrator | + all_metadata = (known after apply) 2026-01-10 13:44:57.186868 | orchestrator | + all_tags = (known after apply) 2026-01-10 13:44:57.186872 | orchestrator | + availability_zone = "nova" 2026-01-10 13:44:57.186876 | orchestrator | + config_drive = true 2026-01-10 13:44:57.186880 | orchestrator | + created = (known after apply) 2026-01-10 13:44:57.186884 | orchestrator | + flavor_id = (known after apply) 2026-01-10 13:44:57.186889 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-01-10 13:44:57.186893 | 
orchestrator | + force_delete = false 2026-01-10 13:44:57.186897 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-10 13:44:57.186901 | orchestrator | + id = (known after apply) 2026-01-10 13:44:57.186905 | orchestrator | + image_id = (known after apply) 2026-01-10 13:44:57.186909 | orchestrator | + image_name = (known after apply) 2026-01-10 13:44:57.186913 | orchestrator | + key_pair = "testbed" 2026-01-10 13:44:57.186918 | orchestrator | + name = "testbed-node-4" 2026-01-10 13:44:57.186922 | orchestrator | + power_state = "active" 2026-01-10 13:44:57.186926 | orchestrator | + region = (known after apply) 2026-01-10 13:44:57.186930 | orchestrator | + security_groups = (known after apply) 2026-01-10 13:44:57.186934 | orchestrator | + stop_before_destroy = false 2026-01-10 13:44:57.186938 | orchestrator | + updated = (known after apply) 2026-01-10 13:44:57.186942 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-01-10 13:44:57.186947 | orchestrator | 2026-01-10 13:44:57.186951 | orchestrator | + block_device { 2026-01-10 13:44:57.186955 | orchestrator | + boot_index = 0 2026-01-10 13:44:57.186959 | orchestrator | + delete_on_termination = false 2026-01-10 13:44:57.186963 | orchestrator | + destination_type = "volume" 2026-01-10 13:44:57.186967 | orchestrator | + multiattach = false 2026-01-10 13:44:57.186972 | orchestrator | + source_type = "volume" 2026-01-10 13:44:57.186976 | orchestrator | + uuid = (known after apply) 2026-01-10 13:44:57.186980 | orchestrator | } 2026-01-10 13:44:57.186984 | orchestrator | 2026-01-10 13:44:57.186988 | orchestrator | + network { 2026-01-10 13:44:57.186992 | orchestrator | + access_network = false 2026-01-10 13:44:57.186996 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-10 13:44:57.187001 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-10 13:44:57.187005 | orchestrator | + mac = (known after apply) 2026-01-10 13:44:57.187009 | orchestrator | + name = (known 
after apply) 2026-01-10 13:44:57.187013 | orchestrator | + port = (known after apply) 2026-01-10 13:44:57.187017 | orchestrator | + uuid = (known after apply) 2026-01-10 13:44:57.187021 | orchestrator | } 2026-01-10 13:44:57.187026 | orchestrator | } 2026-01-10 13:44:57.187227 | orchestrator | 2026-01-10 13:44:57.187240 | orchestrator | # openstack_compute_instance_v2.node_server[5] will be created 2026-01-10 13:44:57.187245 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-01-10 13:44:57.187249 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-10 13:44:57.187253 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-10 13:44:57.187257 | orchestrator | + all_metadata = (known after apply) 2026-01-10 13:44:57.187261 | orchestrator | + all_tags = (known after apply) 2026-01-10 13:44:57.187266 | orchestrator | + availability_zone = "nova" 2026-01-10 13:44:57.187270 | orchestrator | + config_drive = true 2026-01-10 13:44:57.187274 | orchestrator | + created = (known after apply) 2026-01-10 13:44:57.187278 | orchestrator | + flavor_id = (known after apply) 2026-01-10 13:44:57.187282 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-01-10 13:44:57.187296 | orchestrator | + force_delete = false 2026-01-10 13:44:57.187304 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-10 13:44:57.187308 | orchestrator | + id = (known after apply) 2026-01-10 13:44:57.187312 | orchestrator | + image_id = (known after apply) 2026-01-10 13:44:57.187316 | orchestrator | + image_name = (known after apply) 2026-01-10 13:44:57.187320 | orchestrator | + key_pair = "testbed" 2026-01-10 13:44:57.187324 | orchestrator | + name = "testbed-node-5" 2026-01-10 13:44:57.187328 | orchestrator | + power_state = "active" 2026-01-10 13:44:57.187333 | orchestrator | + region = (known after apply) 2026-01-10 13:44:57.187337 | orchestrator | + security_groups = (known after apply) 2026-01-10 13:44:57.187341 | orchestrator | + 
stop_before_destroy = false 2026-01-10 13:44:57.187345 | orchestrator | + updated = (known after apply) 2026-01-10 13:44:57.187349 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-01-10 13:44:57.187353 | orchestrator | 2026-01-10 13:44:57.187357 | orchestrator | + block_device { 2026-01-10 13:44:57.187361 | orchestrator | + boot_index = 0 2026-01-10 13:44:57.187365 | orchestrator | + delete_on_termination = false 2026-01-10 13:44:57.187369 | orchestrator | + destination_type = "volume" 2026-01-10 13:44:57.187373 | orchestrator | + multiattach = false 2026-01-10 13:44:57.187378 | orchestrator | + source_type = "volume" 2026-01-10 13:44:57.187382 | orchestrator | + uuid = (known after apply) 2026-01-10 13:44:57.187386 | orchestrator | } 2026-01-10 13:44:57.187390 | orchestrator | 2026-01-10 13:44:57.187394 | orchestrator | + network { 2026-01-10 13:44:57.187398 | orchestrator | + access_network = false 2026-01-10 13:44:57.187402 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-10 13:44:57.187406 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-10 13:44:57.187410 | orchestrator | + mac = (known after apply) 2026-01-10 13:44:57.187415 | orchestrator | + name = (known after apply) 2026-01-10 13:44:57.187419 | orchestrator | + port = (known after apply) 2026-01-10 13:44:57.187423 | orchestrator | + uuid = (known after apply) 2026-01-10 13:44:57.187427 | orchestrator | } 2026-01-10 13:44:57.187431 | orchestrator | } 2026-01-10 13:44:57.187482 | orchestrator | 2026-01-10 13:44:57.187495 | orchestrator | # openstack_compute_keypair_v2.key will be created 2026-01-10 13:44:57.187500 | orchestrator | + resource "openstack_compute_keypair_v2" "key" { 2026-01-10 13:44:57.187504 | orchestrator | + fingerprint = (known after apply) 2026-01-10 13:44:57.187508 | orchestrator | + id = (known after apply) 2026-01-10 13:44:57.187513 | orchestrator | + name = "testbed" 2026-01-10 13:44:57.187517 | orchestrator | + private_key = 
(sensitive value) 2026-01-10 13:44:57.187521 | orchestrator | + public_key = (known after apply) 2026-01-10 13:44:57.187525 | orchestrator | + region = (known after apply) 2026-01-10 13:44:57.187530 | orchestrator | + user_id = (known after apply) 2026-01-10 13:44:57.187534 | orchestrator | } 2026-01-10 13:44:57.187577 | orchestrator | 2026-01-10 13:44:57.187590 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2026-01-10 13:44:57.187595 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-01-10 13:44:57.187603 | orchestrator | + device = (known after apply) 2026-01-10 13:44:57.187607 | orchestrator | + id = (known after apply) 2026-01-10 13:44:57.187612 | orchestrator | + instance_id = (known after apply) 2026-01-10 13:44:57.187616 | orchestrator | + region = (known after apply) 2026-01-10 13:44:57.187620 | orchestrator | + volume_id = (known after apply) 2026-01-10 13:44:57.187624 | orchestrator | } 2026-01-10 13:44:57.187665 | orchestrator | 2026-01-10 13:44:57.187677 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2026-01-10 13:44:57.187681 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-01-10 13:44:57.187686 | orchestrator | + device = (known after apply) 2026-01-10 13:44:57.187690 | orchestrator | + id = (known after apply) 2026-01-10 13:44:57.187694 | orchestrator | + instance_id = (known after apply) 2026-01-10 13:44:57.187698 | orchestrator | + region = (known after apply) 2026-01-10 13:44:57.187702 | orchestrator | + volume_id = (known after apply) 2026-01-10 13:44:57.187707 | orchestrator | } 2026-01-10 13:44:57.187748 | orchestrator | 2026-01-10 13:44:57.187762 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2026-01-10 13:44:57.187767 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" 
{
2026-01-10 13:44:57.187771 | orchestrator |       + device = (known after apply)
2026-01-10 13:44:57.187775 | orchestrator |       + id = (known after apply)
2026-01-10 13:44:57.187779 | orchestrator |       + instance_id = (known after apply)
2026-01-10 13:44:57.187783 | orchestrator |       + region = (known after apply)
2026-01-10 13:44:57.187787 | orchestrator |       + volume_id = (known after apply)
2026-01-10 13:44:57.187792 | orchestrator |     }
2026-01-10 13:44:57.187832 | orchestrator |
2026-01-10 13:44:57.187844 | orchestrator |   # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
2026-01-10 13:44:57.187848 | orchestrator |   + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-01-10 13:44:57.187853 | orchestrator |       + device = (known after apply)
2026-01-10 13:44:57.187857 | orchestrator |       + id = (known after apply)
2026-01-10 13:44:57.187861 | orchestrator |       + instance_id = (known after apply)
2026-01-10 13:44:57.187865 | orchestrator |       + region = (known after apply)
2026-01-10 13:44:57.187869 | orchestrator |       + volume_id = (known after apply)
2026-01-10 13:44:57.187873 | orchestrator |     }
2026-01-10 13:44:57.187912 | orchestrator |
2026-01-10 13:44:57.187924 | orchestrator |   # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
2026-01-10 13:44:57.187928 | orchestrator |   + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-01-10 13:44:57.187933 | orchestrator |       + device = (known after apply)
2026-01-10 13:44:57.187937 | orchestrator |       + id = (known after apply)
2026-01-10 13:44:57.187941 | orchestrator |       + instance_id = (known after apply)
2026-01-10 13:44:57.187948 | orchestrator |       + region = (known after apply)
2026-01-10 13:44:57.187953 | orchestrator |       + volume_id = (known after apply)
2026-01-10 13:44:57.187957 | orchestrator |     }
2026-01-10 13:44:57.187994 | orchestrator |
2026-01-10 13:44:57.188006 | orchestrator |   # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
2026-01-10 13:44:57.188011 | orchestrator |   + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-01-10 13:44:57.188015 | orchestrator |       + device = (known after apply)
2026-01-10 13:44:57.188019 | orchestrator |       + id = (known after apply)
2026-01-10 13:44:57.188023 | orchestrator |       + instance_id = (known after apply)
2026-01-10 13:44:57.188027 | orchestrator |       + region = (known after apply)
2026-01-10 13:44:57.188032 | orchestrator |       + volume_id = (known after apply)
2026-01-10 13:44:57.188036 | orchestrator |     }
2026-01-10 13:44:57.188080 | orchestrator |
2026-01-10 13:44:57.188093 | orchestrator |   # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
2026-01-10 13:44:57.188098 | orchestrator |   + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-01-10 13:44:57.188102 | orchestrator |       + device = (known after apply)
2026-01-10 13:44:57.188106 | orchestrator |       + id = (known after apply)
2026-01-10 13:44:57.188111 | orchestrator |       + instance_id = (known after apply)
2026-01-10 13:44:57.188115 | orchestrator |       + region = (known after apply)
2026-01-10 13:44:57.188123 | orchestrator |       + volume_id = (known after apply)
2026-01-10 13:44:57.188127 | orchestrator |     }
2026-01-10 13:44:57.188166 | orchestrator |
2026-01-10 13:44:57.188178 | orchestrator |   # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
2026-01-10 13:44:57.188183 | orchestrator |   + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-01-10 13:44:57.188187 | orchestrator |       + device = (known after apply)
2026-01-10 13:44:57.188191 | orchestrator |       + id = (known after apply)
2026-01-10 13:44:57.188196 | orchestrator |       + instance_id = (known after apply)
2026-01-10 13:44:57.188200 | orchestrator |       + region = (known after apply)
2026-01-10 13:44:57.188204 | orchestrator |       + volume_id = (known after apply)
2026-01-10 13:44:57.188208 | orchestrator |     }
2026-01-10 13:44:57.188245 | orchestrator |
2026-01-10 13:44:57.188257 | orchestrator |   # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
2026-01-10 13:44:57.188262 | orchestrator |   + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-01-10 13:44:57.188266 | orchestrator |       + device = (known after apply)
2026-01-10 13:44:57.188270 | orchestrator |       + id = (known after apply)
2026-01-10 13:44:57.188274 | orchestrator |       + instance_id = (known after apply)
2026-01-10 13:44:57.188278 | orchestrator |       + region = (known after apply)
2026-01-10 13:44:57.188282 | orchestrator |       + volume_id = (known after apply)
2026-01-10 13:44:57.188321 | orchestrator |     }
2026-01-10 13:44:57.188366 | orchestrator |
2026-01-10 13:44:57.188378 | orchestrator |   # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
2026-01-10 13:44:57.188383 | orchestrator |   + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
2026-01-10 13:44:57.188387 | orchestrator |       + fixed_ip = (known after apply)
2026-01-10 13:44:57.188392 | orchestrator |       + floating_ip = (known after apply)
2026-01-10 13:44:57.188396 | orchestrator |       + id = (known after apply)
2026-01-10 13:44:57.188400 | orchestrator |       + port_id = (known after apply)
2026-01-10 13:44:57.188404 | orchestrator |       + region = (known after apply)
2026-01-10 13:44:57.188408 | orchestrator |     }
2026-01-10 13:44:57.188486 | orchestrator |
2026-01-10 13:44:57.188499 | orchestrator |   # openstack_networking_floatingip_v2.manager_floating_ip will be created
2026-01-10 13:44:57.188503 | orchestrator |   + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
2026-01-10 13:44:57.188507 | orchestrator |       + address = (known after apply)
2026-01-10 13:44:57.188512 | orchestrator |       + all_tags = (known after apply)
2026-01-10 13:44:57.188516 | orchestrator |       + dns_domain = (known after apply)
2026-01-10 13:44:57.188520 | orchestrator |       + dns_name = (known after apply)
2026-01-10 13:44:57.188524 | orchestrator |       + fixed_ip = (known after apply)
2026-01-10 13:44:57.188528 | orchestrator |       + id = (known after apply)
2026-01-10 13:44:57.188532 | orchestrator |       + pool = "public"
2026-01-10 13:44:57.188536 | orchestrator |       + port_id = (known after apply)
2026-01-10 13:44:57.188541 | orchestrator |       + region = (known after apply)
2026-01-10 13:44:57.188545 | orchestrator |       + subnet_id = (known after apply)
2026-01-10 13:44:57.188548 | orchestrator |       + tenant_id = (known after apply)
2026-01-10 13:44:57.188552 | orchestrator |     }
2026-01-10 13:44:57.188651 | orchestrator |
2026-01-10 13:44:57.188663 | orchestrator |   # openstack_networking_network_v2.net_management will be created
2026-01-10 13:44:57.188667 | orchestrator |   + resource "openstack_networking_network_v2" "net_management" {
2026-01-10 13:44:57.188671 | orchestrator |       + admin_state_up = (known after apply)
2026-01-10 13:44:57.188674 | orchestrator |       + all_tags = (known after apply)
2026-01-10 13:44:57.188678 | orchestrator |       + availability_zone_hints = [
2026-01-10 13:44:57.188682 | orchestrator |           + "nova",
2026-01-10 13:44:57.188686 | orchestrator |         ]
2026-01-10 13:44:57.188690 | orchestrator |       + dns_domain = (known after apply)
2026-01-10 13:44:57.188694 | orchestrator |       + external = (known after apply)
2026-01-10 13:44:57.188698 | orchestrator |       + id = (known after apply)
2026-01-10 13:44:57.188702 | orchestrator |       + mtu = (known after apply)
2026-01-10 13:44:57.188705 | orchestrator |       + name = "net-testbed-management"
2026-01-10 13:44:57.188709 | orchestrator |       + port_security_enabled = (known after apply)
2026-01-10 13:44:57.188717 | orchestrator |       + qos_policy_id = (known after apply)
2026-01-10 13:44:57.188721 | orchestrator |       + region = (known after apply)
2026-01-10 13:44:57.188725 | orchestrator |       + shared = (known after apply)
2026-01-10 13:44:57.188729 | orchestrator |       + tenant_id = (known after apply)
2026-01-10 13:44:57.188733 | orchestrator |       + transparent_vlan = (known after apply)
2026-01-10 13:44:57.188736 | orchestrator |
2026-01-10 13:44:57.188740 | orchestrator |       + segments (known after apply)
2026-01-10 13:44:57.188744 | orchestrator |     }
2026-01-10 13:44:57.188865 | orchestrator |
2026-01-10 13:44:57.188877 | orchestrator |   # openstack_networking_port_v2.manager_port_management will be created
2026-01-10 13:44:57.188881 | orchestrator |   + resource "openstack_networking_port_v2" "manager_port_management" {
2026-01-10 13:44:57.188885 | orchestrator |       + admin_state_up = (known after apply)
2026-01-10 13:44:57.188889 | orchestrator |       + all_fixed_ips = (known after apply)
2026-01-10 13:44:57.188893 | orchestrator |       + all_security_group_ids = (known after apply)
2026-01-10 13:44:57.188900 | orchestrator |       + all_tags = (known after apply)
2026-01-10 13:44:57.188904 | orchestrator |       + device_id = (known after apply)
2026-01-10 13:44:57.188907 | orchestrator |       + device_owner = (known after apply)
2026-01-10 13:44:57.188911 | orchestrator |       + dns_assignment = (known after apply)
2026-01-10 13:44:57.188915 | orchestrator |       + dns_name = (known after apply)
2026-01-10 13:44:57.188919 | orchestrator |       + id = (known after apply)
2026-01-10 13:44:57.188922 | orchestrator |       + mac_address = (known after apply)
2026-01-10 13:44:57.188926 | orchestrator |       + network_id = (known after apply)
2026-01-10 13:44:57.188930 | orchestrator |       + port_security_enabled = (known after apply)
2026-01-10 13:44:57.188933 | orchestrator |       + qos_policy_id = (known after apply)
2026-01-10 13:44:57.188937 | orchestrator |       + region = (known after apply)
2026-01-10 13:44:57.188941 | orchestrator |       + security_group_ids = (known after apply)
2026-01-10 13:44:57.188944 | orchestrator |       + tenant_id = (known after apply)
2026-01-10 13:44:57.188948 | orchestrator |
2026-01-10 13:44:57.188952 | orchestrator |       + allowed_address_pairs {
2026-01-10 13:44:57.188956 | orchestrator |           + ip_address = "192.168.16.8/32"
2026-01-10 13:44:57.188960 | orchestrator |         }
2026-01-10 13:44:57.188963 | orchestrator |
2026-01-10 13:44:57.188967 | orchestrator |       + binding (known after apply)
2026-01-10 13:44:57.188971 | orchestrator |
2026-01-10 13:44:57.188975 | orchestrator |       + fixed_ip {
2026-01-10 13:44:57.188978 | orchestrator |           + ip_address = "192.168.16.5"
2026-01-10 13:44:57.188982 | orchestrator |           + subnet_id = (known after apply)
2026-01-10 13:44:57.188986 | orchestrator |         }
2026-01-10 13:44:57.188990 | orchestrator |     }
2026-01-10 13:44:57.189122 | orchestrator |
2026-01-10 13:44:57.189134 | orchestrator |   # openstack_networking_port_v2.node_port_management[0] will be created
2026-01-10 13:44:57.189138 | orchestrator |   + resource "openstack_networking_port_v2" "node_port_management" {
2026-01-10 13:44:57.189142 | orchestrator |       + admin_state_up = (known after apply)
2026-01-10 13:44:57.189146 | orchestrator |       + all_fixed_ips = (known after apply)
2026-01-10 13:44:57.189150 | orchestrator |       + all_security_group_ids = (known after apply)
2026-01-10 13:44:57.189153 | orchestrator |       + all_tags = (known after apply)
2026-01-10 13:44:57.189157 | orchestrator |       + device_id = (known after apply)
2026-01-10 13:44:57.189161 | orchestrator |       + device_owner = (known after apply)
2026-01-10 13:44:57.189165 | orchestrator |       + dns_assignment = (known after apply)
2026-01-10 13:44:57.189169 | orchestrator |       + dns_name = (known after apply)
2026-01-10 13:44:57.189172 | orchestrator |       + id = (known after apply)
2026-01-10 13:44:57.189176 | orchestrator |       + mac_address = (known after apply)
2026-01-10 13:44:57.189180 | orchestrator |       + network_id = (known after apply)
2026-01-10 13:44:57.189184 | orchestrator |       + port_security_enabled = (known after apply)
2026-01-10 13:44:57.189188 | orchestrator |       + qos_policy_id = (known after apply)
2026-01-10 13:44:57.189191 | orchestrator |       + region = (known after apply)
2026-01-10 13:44:57.189200 | orchestrator |       + security_group_ids = (known after apply)
2026-01-10 13:44:57.189204 | orchestrator |       + tenant_id = (known after apply)
2026-01-10 13:44:57.189208 | orchestrator |
2026-01-10 13:44:57.189211 | orchestrator |       + allowed_address_pairs {
2026-01-10 13:44:57.189215 | orchestrator |           + ip_address = "192.168.16.254/32"
2026-01-10 13:44:57.189219 | orchestrator |         }
2026-01-10 13:44:57.189223 | orchestrator |       + allowed_address_pairs {
2026-01-10 13:44:57.189227 | orchestrator |           + ip_address = "192.168.16.8/32"
2026-01-10 13:44:57.189231 | orchestrator |         }
2026-01-10 13:44:57.189234 | orchestrator |       + allowed_address_pairs {
2026-01-10 13:44:57.189238 | orchestrator |           + ip_address = "192.168.16.9/32"
2026-01-10 13:44:57.189242 | orchestrator |         }
2026-01-10 13:44:57.189246 | orchestrator |
2026-01-10 13:44:57.189250 | orchestrator |       + binding (known after apply)
2026-01-10 13:44:57.189253 | orchestrator |
2026-01-10 13:44:57.189257 | orchestrator |       + fixed_ip {
2026-01-10 13:44:57.189261 | orchestrator |           + ip_address = "192.168.16.10"
2026-01-10 13:44:57.189265 | orchestrator |           + subnet_id = (known after apply)
2026-01-10 13:44:57.189269 | orchestrator |         }
2026-01-10 13:44:57.189273 | orchestrator |     }
2026-01-10 13:44:57.189435 | orchestrator |
2026-01-10 13:44:57.189449 | orchestrator |   # openstack_networking_port_v2.node_port_management[1] will be created
2026-01-10 13:44:57.189453 | orchestrator |   + resource "openstack_networking_port_v2" "node_port_management" {
2026-01-10 13:44:57.189457 | orchestrator |       + admin_state_up = (known after apply)
2026-01-10 13:44:57.189461 | orchestrator |       + all_fixed_ips = (known after apply)
2026-01-10 13:44:57.189465 | orchestrator |       + all_security_group_ids = (known after apply)
2026-01-10 13:44:57.189468 | orchestrator |       + all_tags = (known after apply)
2026-01-10 13:44:57.189472 | orchestrator |       + device_id = (known after apply)
2026-01-10 13:44:57.189481 | orchestrator |       + device_owner = (known after apply)
2026-01-10 13:44:57.189485 | orchestrator |       + dns_assignment = (known after apply)
2026-01-10 13:44:57.189489 | orchestrator |       + dns_name = (known after apply)
2026-01-10 13:44:57.189492 | orchestrator |       + id = (known after apply)
2026-01-10 13:44:57.189496 | orchestrator |       + mac_address = (known after apply)
2026-01-10 13:44:57.189500 | orchestrator |       + network_id = (known after apply)
2026-01-10 13:44:57.189504 | orchestrator |       + port_security_enabled = (known after apply)
2026-01-10 13:44:57.189507 | orchestrator |       + qos_policy_id = (known after apply)
2026-01-10 13:44:57.189511 | orchestrator |       + region = (known after apply)
2026-01-10 13:44:57.189515 | orchestrator |       + security_group_ids = (known after apply)
2026-01-10 13:44:57.189519 | orchestrator |       + tenant_id = (known after apply)
2026-01-10 13:44:57.189523 | orchestrator |
2026-01-10 13:44:57.189526 | orchestrator |       + allowed_address_pairs {
2026-01-10 13:44:57.189530 | orchestrator |           + ip_address = "192.168.16.254/32"
2026-01-10 13:44:57.189534 | orchestrator |         }
2026-01-10 13:44:57.189538 | orchestrator |       + allowed_address_pairs {
2026-01-10 13:44:57.189542 | orchestrator |           + ip_address = "192.168.16.8/32"
2026-01-10 13:44:57.189545 | orchestrator |         }
2026-01-10 13:44:57.189549 | orchestrator |       + allowed_address_pairs {
2026-01-10 13:44:57.189553 | orchestrator |           + ip_address = "192.168.16.9/32"
2026-01-10 13:44:57.189557 | orchestrator |         }
2026-01-10 13:44:57.189561 | orchestrator |
2026-01-10 13:44:57.189564 | orchestrator |       + binding (known after apply)
2026-01-10 13:44:57.189568 | orchestrator |
2026-01-10 13:44:57.189572 | orchestrator |       + fixed_ip {
2026-01-10 13:44:57.189576 | orchestrator |           + ip_address = "192.168.16.11"
2026-01-10 13:44:57.189580 | orchestrator |           + subnet_id = (known after apply)
2026-01-10 13:44:57.189583 | orchestrator |         }
2026-01-10 13:44:57.189587 | orchestrator |     }
2026-01-10 13:44:57.189731 | orchestrator |
2026-01-10 13:44:57.189744 | orchestrator |   # openstack_networking_port_v2.node_port_management[2] will be created
2026-01-10 13:44:57.189748 | orchestrator |   + resource "openstack_networking_port_v2" "node_port_management" {
2026-01-10 13:44:57.189752 | orchestrator |       + admin_state_up = (known after apply)
2026-01-10 13:44:57.189756 | orchestrator |       + all_fixed_ips = (known after apply)
2026-01-10 13:44:57.189760 | orchestrator |       + all_security_group_ids = (known after apply)
2026-01-10 13:44:57.189763 | orchestrator |       + all_tags = (known after apply)
2026-01-10 13:44:57.189772 | orchestrator |       + device_id = (known after apply)
2026-01-10 13:44:57.189776 | orchestrator |       + device_owner = (known after apply)
2026-01-10 13:44:57.189779 | orchestrator |       + dns_assignment = (known after apply)
2026-01-10 13:44:57.189783 | orchestrator |       + dns_name = (known after apply)
2026-01-10 13:44:57.189790 | orchestrator |       + id = (known after apply)
2026-01-10 13:44:57.189793 | orchestrator |       + mac_address = (known after apply)
2026-01-10 13:44:57.189797 | orchestrator |       + network_id = (known after apply)
2026-01-10 13:44:57.189801 | orchestrator |       + port_security_enabled = (known after apply)
2026-01-10 13:44:57.189805 | orchestrator |       + qos_policy_id = (known after apply)
2026-01-10 13:44:57.189808 | orchestrator |       + region = (known after apply)
2026-01-10 13:44:57.189812 | orchestrator |       + security_group_ids = (known after apply)
2026-01-10 13:44:57.189816 | orchestrator |       + tenant_id = (known after apply)
2026-01-10 13:44:57.189819 | orchestrator |
2026-01-10 13:44:57.189823 | orchestrator |       + allowed_address_pairs {
2026-01-10 13:44:57.189827 | orchestrator |           + ip_address = "192.168.16.254/32"
2026-01-10 13:44:57.189831 | orchestrator |         }
2026-01-10 13:44:57.189834 | orchestrator |       + allowed_address_pairs {
2026-01-10 13:44:57.189838 | orchestrator |           + ip_address = "192.168.16.8/32"
2026-01-10 13:44:57.189842 | orchestrator |         }
2026-01-10 13:44:57.189846 | orchestrator |       + allowed_address_pairs {
2026-01-10 13:44:57.189849 | orchestrator |           + ip_address = "192.168.16.9/32"
2026-01-10 13:44:57.189853 | orchestrator |         }
2026-01-10 13:44:57.189857 | orchestrator |
2026-01-10 13:44:57.189861 | orchestrator |       + binding (known after apply)
2026-01-10 13:44:57.189864 | orchestrator |
2026-01-10 13:44:57.189868 | orchestrator |       + fixed_ip {
2026-01-10 13:44:57.189872 | orchestrator |           + ip_address = "192.168.16.12"
2026-01-10 13:44:57.189876 | orchestrator |           + subnet_id = (known after apply)
2026-01-10 13:44:57.189879 | orchestrator |         }
2026-01-10 13:44:57.189883 | orchestrator |     }
2026-01-10 13:44:57.190050 | orchestrator |
2026-01-10 13:44:57.190066 | orchestrator |   # openstack_networking_port_v2.node_port_management[3] will be created
2026-01-10 13:44:57.190070 | orchestrator |   + resource "openstack_networking_port_v2" "node_port_management" {
2026-01-10 13:44:57.190074 | orchestrator |       + admin_state_up = (known after apply)
2026-01-10 13:44:57.190078 | orchestrator |       + all_fixed_ips = (known after apply)
2026-01-10 13:44:57.190082 | orchestrator |       + all_security_group_ids = (known after apply)
2026-01-10 13:44:57.190086 | orchestrator |       + all_tags = (known after apply)
2026-01-10 13:44:57.190090 | orchestrator |       + device_id = (known after apply)
2026-01-10 13:44:57.190094 | orchestrator |       + device_owner = (known after apply)
2026-01-10 13:44:57.190098 | orchestrator |       + dns_assignment = (known after apply)
2026-01-10 13:44:57.190102 | orchestrator |       + dns_name = (known after apply)
2026-01-10 13:44:57.190106 | orchestrator |       + id = (known after apply)
2026-01-10 13:44:57.190111 | orchestrator |       + mac_address = (known after apply)
2026-01-10 13:44:57.190114 | orchestrator |       + network_id = (known after apply)
2026-01-10 13:44:57.190118 | orchestrator |       + port_security_enabled = (known after apply)
2026-01-10 13:44:57.190122 | orchestrator |       + qos_policy_id = (known after apply)
2026-01-10 13:44:57.190126 | orchestrator |       + region = (known after apply)
2026-01-10 13:44:57.190130 | orchestrator |       + security_group_ids = (known after apply)
2026-01-10 13:44:57.190134 | orchestrator |       + tenant_id = (known after apply)
2026-01-10 13:44:57.190137 | orchestrator |
2026-01-10 13:44:57.190141 | orchestrator |       + allowed_address_pairs {
2026-01-10 13:44:57.190145 | orchestrator |           + ip_address = "192.168.16.254/32"
2026-01-10 13:44:57.190149 | orchestrator |         }
2026-01-10 13:44:57.190153 | orchestrator |       + allowed_address_pairs {
2026-01-10 13:44:57.190157 | orchestrator |           + ip_address = "192.168.16.8/32"
2026-01-10 13:44:57.190160 | orchestrator |         }
2026-01-10 13:44:57.190164 | orchestrator |       + allowed_address_pairs {
2026-01-10 13:44:57.190168 | orchestrator |           + ip_address = "192.168.16.9/32"
2026-01-10 13:44:57.190172 | orchestrator |         }
2026-01-10 13:44:57.190175 | orchestrator |
2026-01-10 13:44:57.190184 | orchestrator |       + binding (known after apply)
2026-01-10 13:44:57.190188 | orchestrator |
2026-01-10 13:44:57.190192 | orchestrator |       + fixed_ip {
2026-01-10 13:44:57.190196 | orchestrator |           + ip_address = "192.168.16.13"
2026-01-10 13:44:57.190200 | orchestrator |           + subnet_id = (known after apply)
2026-01-10 13:44:57.190203 | orchestrator |         }
2026-01-10 13:44:57.190207 | orchestrator |     }
2026-01-10 13:44:57.190410 | orchestrator |
2026-01-10 13:44:57.190437 | orchestrator |   # openstack_networking_port_v2.node_port_management[4] will be created
2026-01-10 13:44:57.190442 | orchestrator |   + resource "openstack_networking_port_v2" "node_port_management" {
2026-01-10 13:44:57.190446 | orchestrator |       + admin_state_up = (known after apply)
2026-01-10 13:44:57.190450 | orchestrator |       + all_fixed_ips = (known after apply)
2026-01-10 13:44:57.190453 | orchestrator |       + all_security_group_ids = (known after apply)
2026-01-10 13:44:57.190457 | orchestrator |       + all_tags = (known after apply)
2026-01-10 13:44:57.190461 | orchestrator |       + device_id = (known after apply)
2026-01-10 13:44:57.190465 | orchestrator |       + device_owner = (known after apply)
2026-01-10 13:44:57.190469 | orchestrator |       + dns_assignment = (known after apply)
2026-01-10 13:44:57.190472 | orchestrator |       + dns_name = (known after apply)
2026-01-10 13:44:57.190476 | orchestrator |       + id = (known after apply)
2026-01-10 13:44:57.190480 | orchestrator |       + mac_address = (known after apply)
2026-01-10 13:44:57.190484 | orchestrator |       + network_id = (known after apply)
2026-01-10 13:44:57.190487 | orchestrator |       + port_security_enabled = (known after apply)
2026-01-10 13:44:57.190501 | orchestrator |       + qos_policy_id = (known after apply)
2026-01-10 13:44:57.190506 | orchestrator |       + region = (known after apply)
2026-01-10 13:44:57.190509 | orchestrator |       + security_group_ids = (known after apply)
2026-01-10 13:44:57.190513 | orchestrator |       + tenant_id = (known after apply)
2026-01-10 13:44:57.190518 | orchestrator |
2026-01-10 13:44:57.190522 | orchestrator |       + allowed_address_pairs {
2026-01-10 13:44:57.190527 | orchestrator |           + ip_address = "192.168.16.254/32"
2026-01-10 13:44:57.190531 | orchestrator |         }
2026-01-10 13:44:57.190535 | orchestrator |       + allowed_address_pairs {
2026-01-10 13:44:57.190538 | orchestrator |           + ip_address = "192.168.16.8/32"
2026-01-10 13:44:57.190542 | orchestrator |         }
2026-01-10 13:44:57.190546 | orchestrator |       + allowed_address_pairs {
2026-01-10 13:44:57.190550 | orchestrator |           + ip_address = "192.168.16.9/32"
2026-01-10 13:44:57.190553 | orchestrator |         }
2026-01-10 13:44:57.190557 | orchestrator |
2026-01-10 13:44:57.190561 | orchestrator |       + binding (known after apply)
2026-01-10 13:44:57.190565 | orchestrator |
2026-01-10 13:44:57.190568 | orchestrator |       + fixed_ip {
2026-01-10 13:44:57.190572 | orchestrator |           + ip_address = "192.168.16.14"
2026-01-10 13:44:57.190576 | orchestrator |           + subnet_id = (known after apply)
2026-01-10 13:44:57.190580 | orchestrator |         }
2026-01-10 13:44:57.190583 | orchestrator |     }
2026-01-10 13:44:57.190786 | orchestrator |
2026-01-10 13:44:57.190800 | orchestrator |   # openstack_networking_port_v2.node_port_management[5] will be created
2026-01-10 13:44:57.190804 | orchestrator |   + resource "openstack_networking_port_v2" "node_port_management" {
2026-01-10 13:44:57.190808 | orchestrator |       + admin_state_up = (known after apply)
2026-01-10 13:44:57.190812 | orchestrator |       + all_fixed_ips = (known after apply)
2026-01-10 13:44:57.190816 | orchestrator |       + all_security_group_ids = (known after apply)
2026-01-10 13:44:57.190820 | orchestrator |       + all_tags = (known after apply)
2026-01-10 13:44:57.190832 | orchestrator |       + device_id = (known after apply)
2026-01-10 13:44:57.190836 | orchestrator |       + device_owner = (known after apply)
2026-01-10 13:44:57.190840 | orchestrator |       + dns_assignment = (known after apply)
2026-01-10 13:44:57.190844 | orchestrator |       + dns_name = (known after apply)
2026-01-10 13:44:57.190848 | orchestrator |       + id = (known after apply)
2026-01-10 13:44:57.190852 | orchestrator |       + mac_address = (known after apply)
2026-01-10 13:44:57.190855 | orchestrator |       + network_id = (known after apply)
2026-01-10 13:44:57.190859 | orchestrator |       + port_security_enabled = (known after apply)
2026-01-10 13:44:57.190863 | orchestrator |       + qos_policy_id = (known after apply)
2026-01-10 13:44:57.190871 | orchestrator |       + region = (known after apply)
2026-01-10 13:44:57.190875 | orchestrator |       + security_group_ids = (known after apply)
2026-01-10 13:44:57.190879 | orchestrator |       + tenant_id = (known after apply)
2026-01-10 13:44:57.190883 | orchestrator |
2026-01-10 13:44:57.190886 | orchestrator |       + allowed_address_pairs {
2026-01-10 13:44:57.190890 | orchestrator |           + ip_address = "192.168.16.254/32"
2026-01-10 13:44:57.190894 | orchestrator |         }
2026-01-10 13:44:57.190898 | orchestrator |       + allowed_address_pairs {
2026-01-10 13:44:57.190902 | orchestrator |           + ip_address = "192.168.16.8/32"
2026-01-10 13:44:57.190905 | orchestrator |         }
2026-01-10 13:44:57.190909 | orchestrator |       + allowed_address_pairs {
2026-01-10 13:44:57.190913 | orchestrator |           + ip_address = "192.168.16.9/32"
2026-01-10 13:44:57.190917 | orchestrator |         }
2026-01-10 13:44:57.190920 | orchestrator |
2026-01-10 13:44:57.190928 | orchestrator |       + binding (known after apply)
2026-01-10 13:44:57.190932 | orchestrator |
2026-01-10 13:44:57.190936 | orchestrator |       + fixed_ip {
2026-01-10 13:44:57.190947 | orchestrator |           + ip_address = "192.168.16.15"
2026-01-10 13:44:57.190951 | orchestrator |           + subnet_id = (known after apply)
2026-01-10 13:44:57.190955 | orchestrator |         }
2026-01-10 13:44:57.190959 | orchestrator |     }
2026-01-10 13:44:57.191084 | orchestrator |
2026-01-10 13:44:57.191099 | orchestrator |   # openstack_networking_router_interface_v2.router_interface will be created
2026-01-10 13:44:57.191103 | orchestrator |   + resource "openstack_networking_router_interface_v2" "router_interface" {
2026-01-10 13:44:57.191107 | orchestrator |       + force_destroy = false
2026-01-10 13:44:57.191111 | orchestrator |       + id = (known after apply)
2026-01-10 13:44:57.191123 | orchestrator |       + port_id = (known after apply)
2026-01-10 13:44:57.191127 | orchestrator |       + region = (known after apply)
2026-01-10 13:44:57.191131 | orchestrator |       + router_id = (known after apply)
2026-01-10 13:44:57.191135 | orchestrator |       + subnet_id = (known after apply)
2026-01-10 13:44:57.191139 | orchestrator |     }
2026-01-10 13:44:57.191242 | orchestrator |
2026-01-10 13:44:57.191256 | orchestrator |   # openstack_networking_router_v2.router will be created
2026-01-10 13:44:57.191263 | orchestrator |   + resource "openstack_networking_router_v2" "router" {
2026-01-10 13:44:57.191269 | orchestrator |       + admin_state_up = (known after apply)
2026-01-10 13:44:57.191273 | orchestrator |       + all_tags = (known after apply)
2026-01-10 13:44:57.191276 | orchestrator |       + availability_zone_hints = [
2026-01-10 13:44:57.191280 | orchestrator |           + "nova",
2026-01-10 13:44:57.191284 | orchestrator |         ]
2026-01-10 13:44:57.191308 | orchestrator |       + distributed = (known after apply)
2026-01-10 13:44:57.191312 | orchestrator |       + enable_snat = (known after apply)
2026-01-10 13:44:57.191316 | orchestrator |       + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
2026-01-10 13:44:57.191320 | orchestrator |       + external_qos_policy_id = (known after apply)
2026-01-10 13:44:57.191324 | orchestrator |       + id = (known after apply)
2026-01-10 13:44:57.191328 | orchestrator |       + name = "testbed"
2026-01-10 13:44:57.191332 | orchestrator |       + region = (known after apply)
2026-01-10 13:44:57.191335 | orchestrator |       + tenant_id = (known after apply)
2026-01-10 13:44:57.191339 | orchestrator |
2026-01-10 13:44:57.191343 | orchestrator |       + external_fixed_ip (known after apply)
2026-01-10 13:44:57.191347 | orchestrator |     }
2026-01-10 13:44:57.191448 | orchestrator |
2026-01-10 13:44:57.191460 | orchestrator |   # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created
2026-01-10 13:44:57.191465 | orchestrator |   + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
2026-01-10 13:44:57.191469 | orchestrator |       + description = "ssh"
2026-01-10 13:44:57.191473 | orchestrator |       + direction = "ingress"
2026-01-10 13:44:57.191477 | orchestrator |       + ethertype = "IPv4"
2026-01-10 13:44:57.191480 | orchestrator |       + id = (known after apply)
2026-01-10 13:44:57.191484 | orchestrator |       + port_range_max = 22
2026-01-10 13:44:57.191488 | orchestrator |       + port_range_min = 22
2026-01-10 13:44:57.191492 | orchestrator |       + protocol = "tcp"
2026-01-10 13:44:57.191496 | orchestrator |       + region = (known after apply)
2026-01-10 13:44:57.191514 | orchestrator |       + remote_address_group_id = (known after apply)
2026-01-10 13:44:57.191518 | orchestrator |       + remote_group_id = (known after apply)
2026-01-10 13:44:57.191522 | orchestrator |       + remote_ip_prefix = "0.0.0.0/0"
2026-01-10 13:44:57.191526 | orchestrator |       + security_group_id = (known after apply)
2026-01-10 13:44:57.191530 | orchestrator |       + tenant_id = (known after apply)
2026-01-10 13:44:57.191533 | orchestrator |     }
2026-01-10 13:44:57.191629 | orchestrator |
2026-01-10 13:44:57.191641 | orchestrator |   # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created
2026-01-10 13:44:57.191646 | orchestrator |   + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
2026-01-10 13:44:57.191649 | orchestrator |       + description = "wireguard"
2026-01-10 13:44:57.191653 | orchestrator |       + direction = "ingress"
2026-01-10 13:44:57.191657 | orchestrator |       + ethertype = "IPv4"
2026-01-10 13:44:57.191661 | orchestrator |       + id = (known after apply)
2026-01-10 13:44:57.191665 | orchestrator |       + port_range_max = 51820
2026-01-10 13:44:57.191668 | orchestrator |       + port_range_min = 51820
2026-01-10 13:44:57.191672 | orchestrator |       + protocol = "udp"
2026-01-10 13:44:57.191676 | orchestrator |       + region = (known after apply)
2026-01-10 13:44:57.191680 | orchestrator |       + remote_address_group_id = (known after apply)
2026-01-10 13:44:57.191684 | orchestrator |       + remote_group_id = (known after apply)
2026-01-10 13:44:57.191696 | orchestrator |       + remote_ip_prefix = "0.0.0.0/0"
2026-01-10 13:44:57.191700 | orchestrator |       + security_group_id = (known after apply)
2026-01-10 13:44:57.191704 | orchestrator |       + tenant_id = (known after apply)
2026-01-10 13:44:57.191708 | orchestrator |     }
2026-01-10 13:44:57.191777 | orchestrator |
2026-01-10 13:44:57.191802 | orchestrator |   # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created
2026-01-10 13:44:57.191807 | orchestrator |   + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" {
2026-01-10 13:44:57.191810 | orchestrator |       + direction = "ingress"
2026-01-10 13:44:57.191814 | orchestrator |       + ethertype = "IPv4"
2026-01-10 13:44:57.191818 | orchestrator |       + id = (known after apply)
2026-01-10 13:44:57.191822 | orchestrator |       + protocol = "tcp"
2026-01-10 13:44:57.191825 | orchestrator |       + region = (known after apply)
2026-01-10 13:44:57.191829 | orchestrator |       + remote_address_group_id = (known after apply)
2026-01-10 13:44:57.191833 | orchestrator |       + remote_group_id = (known after apply)
2026-01-10 13:44:57.191837 | orchestrator |       + remote_ip_prefix = "192.168.16.0/20"
2026-01-10 13:44:57.191840 | orchestrator |       + security_group_id = (known after apply)
2026-01-10 13:44:57.191844 | orchestrator |       + tenant_id = (known after apply)
2026-01-10 13:44:57.191848 | orchestrator |     }
2026-01-10 13:44:57.191959 | orchestrator |
2026-01-10 13:44:57.191974 | orchestrator |   # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created
2026-01-10 13:44:57.191978 | orchestrator |   + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" {
2026-01-10 13:44:57.191982 | orchestrator |       + direction = "ingress"
2026-01-10 13:44:57.191985 | orchestrator |       + ethertype = "IPv4"
2026-01-10 13:44:57.191989 | orchestrator |       + id = (known after apply)
2026-01-10 13:44:57.191993 | orchestrator |       + protocol = "udp"
2026-01-10 13:44:57.191997 | orchestrator |       + region = (known after apply)
2026-01-10 13:44:57.192000 | orchestrator |       + remote_address_group_id = (known after apply)
2026-01-10 13:44:57.192004 | orchestrator |       + remote_group_id = (known after apply)
2026-01-10 13:44:57.192008 | orchestrator |       + remote_ip_prefix = "192.168.16.0/20"
2026-01-10 13:44:57.192012 | orchestrator |       + security_group_id = (known after apply)
2026-01-10 13:44:57.192015 | orchestrator |       + tenant_id = (known after apply)
2026-01-10 13:44:57.192028 | orchestrator |     }
2026-01-10 13:44:57.192097 | orchestrator |
2026-01-10 13:44:57.192121 | orchestrator |   # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created
2026-01-10 13:44:57.192131 | orchestrator |   + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" {
2026-01-10 13:44:57.192135 | orchestrator |       + direction = "ingress"
2026-01-10 13:44:57.192139 | orchestrator |       + ethertype = "IPv4"
2026-01-10 13:44:57.192142 | orchestrator |       + id = (known after apply)
2026-01-10 13:44:57.192146 | orchestrator |       + protocol = "icmp"
2026-01-10 13:44:57.192150 | orchestrator |       + region = (known after apply)
2026-01-10 13:44:57.192154 | orchestrator |       + remote_address_group_id = (known after apply)
2026-01-10 13:44:57.192157 | orchestrator |       + remote_group_id = (known after apply)
2026-01-10 13:44:57.192161 | orchestrator |       + remote_ip_prefix = "0.0.0.0/0"
2026-01-10 13:44:57.192165 | orchestrator |       + security_group_id = (known after apply)
2026-01-10 13:44:57.192169 | orchestrator |       + tenant_id = (known after apply)
2026-01-10 13:44:57.192173 | orchestrator |     }
2026-01-10 13:44:57.192252 | orchestrator |
2026-01-10 13:44:57.192278 | orchestrator |   # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
2026-01-10 13:44:57.192283 | orchestrator |   + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
2026-01-10 13:44:57.192300 | orchestrator |       + direction = "ingress"
2026-01-10 13:44:57.192305 | orchestrator |       + ethertype = "IPv4"
2026-01-10 13:44:57.192311 | orchestrator |       + id = (known after apply)
2026-01-10 13:44:57.192317 | orchestrator |       + protocol = "tcp"
2026-01-10 13:44:57.192323 | orchestrator |       + region = (known after apply)
2026-01-10 13:44:57.192329 | orchestrator |       + remote_address_group_id = (known after apply)
2026-01-10 13:44:57.192352 | orchestrator |       + remote_group_id = (known after apply)
2026-01-10 13:44:57.192357 | orchestrator |       + remote_ip_prefix = "0.0.0.0/0"
2026-01-10 13:44:57.192361 | orchestrator |       + security_group_id = (known after apply)
2026-01-10 13:44:57.192364 | orchestrator |       + tenant_id = (known after apply)
2026-01-10 13:44:57.192368 | orchestrator |     }
2026-01-10 13:44:57.192439 | orchestrator |
2026-01-10 13:44:57.192463 | orchestrator |   # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
2026-01-10 13:44:57.192468 | orchestrator |   + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
2026-01-10 13:44:57.192471 | orchestrator |       + direction = "ingress"
2026-01-10 13:44:57.192475 | orchestrator |       + ethertype = "IPv4"
2026-01-10 13:44:57.192479 | orchestrator |       + id = (known after apply)
2026-01-10 13:44:57.192483 | orchestrator |       + protocol = "udp"
2026-01-10 13:44:57.192487 | orchestrator |       + region = (known after apply)
2026-01-10 13:44:57.192490 | orchestrator |       + remote_address_group_id = (known after apply)
2026-01-10 13:44:57.192494 | orchestrator |       + remote_group_id = (known after apply)
2026-01-10 13:44:57.192498 | orchestrator |       + remote_ip_prefix = "0.0.0.0/0"
2026-01-10 13:44:57.192502 | orchestrator |       + security_group_id = (known after apply)
2026-01-10 13:44:57.192506 | orchestrator |       + tenant_id = (known after apply)
2026-01-10 13:44:57.192509 | orchestrator |     }
2026-01-10 13:44:57.192584 | orchestrator |
2026-01-10 13:44:57.192596 | orchestrator |   # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
2026-01-10 13:44:57.192608 | orchestrator |   + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
2026-01-10 13:44:57.192612 | orchestrator |       + direction = "ingress"
2026-01-10 13:44:57.192619 | orchestrator |       + ethertype = "IPv4"
2026-01-10 13:44:57.192623 | orchestrator |       + id = (known after apply)
2026-01-10 13:44:57.192626 | orchestrator |       + protocol = "icmp"
2026-01-10 13:44:57.192630 | orchestrator |       + region = (known after apply)
2026-01-10 13:44:57.192634 | orchestrator |       + remote_address_group_id = (known after apply)
2026-01-10 13:44:57.192638 | orchestrator |       + remote_group_id = (known after apply)
2026-01-10 13:44:57.192641 | orchestrator |       + remote_ip_prefix = "0.0.0.0/0"
2026-01-10 13:44:57.192645 | orchestrator |       + security_group_id = (known after apply)
2026-01-10 13:44:57.192649 | orchestrator |       + tenant_id = (known after apply)
2026-01-10 13:44:57.192657 | orchestrator |     }
2026-01-10 13:44:57.192730 | orchestrator |
2026-01-10 13:44:57.192755 | orchestrator |   # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
2026-01-10 13:44:57.192760 | orchestrator |   + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
2026-01-10 13:44:57.192764 | orchestrator |       + description = "vrrp"
2026-01-10 13:44:57.192767 | orchestrator |       + direction = "ingress"
2026-01-10 13:44:57.192771 | orchestrator |       + ethertype = "IPv4"
2026-01-10 13:44:57.192775 | orchestrator |       + id = (known after apply)
2026-01-10 13:44:57.192779 | orchestrator |       + protocol = "112"
2026-01-10 13:44:57.192783 | orchestrator |       + region = (known after apply)
2026-01-10 13:44:57.192786 | orchestrator |       + remote_address_group_id = (known after apply)
2026-01-10 13:44:57.192790 | orchestrator |       + remote_group_id = (known after apply)
2026-01-10 13:44:57.192794 | orchestrator |       + remote_ip_prefix = "0.0.0.0/0"
2026-01-10 13:44:57.192798 | orchestrator |       + security_group_id = (known after apply)
2026-01-10 13:44:57.192802 | orchestrator |       + tenant_id = (known after apply)
2026-01-10 13:44:57.192805 | orchestrator |     }
2026-01-10 13:44:57.192866 | orchestrator |
2026-01-10 13:44:57.192877 | orchestrator |   # openstack_networking_secgroup_v2.security_group_management will be created
2026-01-10 13:44:57.192882 | orchestrator |   + resource "openstack_networking_secgroup_v2" "security_group_management" {
2026-01-10 13:44:57.192885 | orchestrator |       + all_tags = (known after apply)
2026-01-10 13:44:57.192889 | orchestrator |       + description = "management security group"
2026-01-10 13:44:57.192901 | orchestrator |       + id = (known after apply)
2026-01-10 13:44:57.192905 | orchestrator |       + name = "testbed-management"
2026-01-10 13:44:57.192909 | orchestrator |       + region = (known after apply)
2026-01-10 13:44:57.192912 | orchestrator |       + stateful = (known after apply)
2026-01-10 13:44:57.192916 | orchestrator |       + tenant_id = (known after apply)
2026-01-10 13:44:57.192920 | orchestrator |     }
2026-01-10 13:44:57.192973 | orchestrator |
2026-01-10 13:44:57.192984 | orchestrator |   # openstack_networking_secgroup_v2.security_group_node will be created
2026-01-10 13:44:57.192989 | orchestrator |   + resource "openstack_networking_secgroup_v2" "security_group_node" {
2026-01-10 13:44:57.192992 | orchestrator |       + all_tags = (known after apply)
2026-01-10 13:44:57.193004 | orchestrator |       + description = "node security group"
2026-01-10 13:44:57.193008 | orchestrator |       + id = (known after apply)
2026-01-10 13:44:57.193012 | orchestrator |       + name = "testbed-node"
2026-01-10 13:44:57.193016 | orchestrator |       + region = (known after apply)
2026-01-10 13:44:57.193019 | orchestrator |       + stateful = (known after apply)
2026-01-10 13:44:57.193023 | orchestrator |       + tenant_id = (known after apply)
2026-01-10 13:44:57.193027 | orchestrator |     }
2026-01-10 13:44:57.193148 | orchestrator |
2026-01-10 13:44:57.193161 | orchestrator |   # openstack_networking_subnet_v2.subnet_management will be created
2026-01-10 13:44:57.193165 | orchestrator |   + resource "openstack_networking_subnet_v2" "subnet_management" {
2026-01-10 13:44:57.193169 | orchestrator |       + all_tags = (known after apply)
2026-01-10 13:44:57.193173 | orchestrator |       + cidr = "192.168.16.0/20"
2026-01-10 13:44:57.193176 | orchestrator |       + dns_nameservers = [
2026-01-10 13:44:57.193189 | orchestrator |           + "8.8.8.8",
2026-01-10 13:44:57.193193 | orchestrator |           + "9.9.9.9",
2026-01-10 13:44:57.193197 | orchestrator |         ]
2026-01-10 13:44:57.193200 | orchestrator |       + enable_dhcp = true
2026-01-10 13:44:57.193204 | orchestrator |       + gateway_ip = (known after apply)
2026-01-10 13:44:57.193208 | orchestrator |       + id = (known after apply)
2026-01-10 13:44:57.193212 | orchestrator |       + ip_version = 4
2026-01-10 13:44:57.193216 | orchestrator |       + ipv6_address_mode = (known after apply)
2026-01-10 13:44:57.193220 | orchestrator |       + ipv6_ra_mode = (known after apply)
2026-01-10 13:44:57.193226 | orchestrator |       + name = "subnet-testbed-management"
2026-01-10 13:44:57.193232 | orchestrator | + network_id = (known after apply) 2026-01-10 13:44:57.193237 | orchestrator | + no_gateway = false 2026-01-10 13:44:57.193243 | orchestrator | + region = (known after apply) 2026-01-10 13:44:57.193248 | orchestrator | + service_types = (known after apply) 2026-01-10 13:44:57.193260 | orchestrator | + tenant_id = (known after apply) 2026-01-10 13:44:57.193265 | orchestrator | 2026-01-10 13:44:57.193271 | orchestrator | + allocation_pool { 2026-01-10 13:44:57.193276 | orchestrator | + end = "192.168.31.250" 2026-01-10 13:44:57.193279 | orchestrator | + start = "192.168.31.200" 2026-01-10 13:44:57.193283 | orchestrator | } 2026-01-10 13:44:57.193300 | orchestrator | } 2026-01-10 13:44:57.193342 | orchestrator | 2026-01-10 13:44:57.193354 | orchestrator | # terraform_data.image will be created 2026-01-10 13:44:57.193358 | orchestrator | + resource "terraform_data" "image" { 2026-01-10 13:44:57.193362 | orchestrator | + id = (known after apply) 2026-01-10 13:44:57.193366 | orchestrator | + input = "Ubuntu 24.04" 2026-01-10 13:44:57.193370 | orchestrator | + output = (known after apply) 2026-01-10 13:44:57.193374 | orchestrator | } 2026-01-10 13:44:57.193418 | orchestrator | 2026-01-10 13:44:57.193429 | orchestrator | # terraform_data.image_node will be created 2026-01-10 13:44:57.193434 | orchestrator | + resource "terraform_data" "image_node" { 2026-01-10 13:44:57.193437 | orchestrator | + id = (known after apply) 2026-01-10 13:44:57.193441 | orchestrator | + input = "Ubuntu 24.04" 2026-01-10 13:44:57.193445 | orchestrator | + output = (known after apply) 2026-01-10 13:44:57.193449 | orchestrator | } 2026-01-10 13:44:57.193478 | orchestrator | 2026-01-10 13:44:57.193483 | orchestrator | Plan: 64 to add, 0 to change, 0 to destroy. 
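The plan entries above map one-to-one back onto HCL. As a minimal sketch, this is roughly how the node security group and the VRRP rule would appear in the Terraform source; resource names and literal values are taken from the plan output, while the `security_group_id` reference is an assumption, since the plan only shows it as (known after apply):

```hcl
# Node security group, as named in the plan ("testbed-node").
resource "openstack_networking_secgroup_v2" "security_group_node" {
  name        = "testbed-node"
  description = "node security group"
}

# VRRP has no protocol keyword of its own in Neutron, so the rule uses
# the numeric IP protocol number 112. Which group the rule attaches to
# is not visible in the plan, so the reference below is assumed.
resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
  description       = "vrrp"
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "112"
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = openstack_networking_secgroup_v2.security_group_node.id
}
```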
2026-01-10 13:44:57.193494 | orchestrator |
2026-01-10 13:44:57.193498 | orchestrator | Changes to Outputs:
2026-01-10 13:44:57.193508 | orchestrator | + manager_address = (sensitive value)
2026-01-10 13:44:57.193513 | orchestrator | + private_key = (sensitive value)
2026-01-10 13:44:57.345533 | orchestrator | terraform_data.image: Creating...
2026-01-10 13:44:57.345903 | orchestrator | terraform_data.image: Creation complete after 0s [id=dbb7b6b7-3dfa-2c8e-57d1-13214f5dccfa]
2026-01-10 13:44:57.418098 | orchestrator | terraform_data.image_node: Creating...
2026-01-10 13:44:57.434089 | orchestrator | terraform_data.image_node: Creation complete after 0s [id=32455574-be62-f569-82d8-77e035d17c99]
2026-01-10 13:44:57.434155 | orchestrator | data.openstack_images_image_v2.image_node: Reading...
2026-01-10 13:44:57.434161 | orchestrator | data.openstack_images_image_v2.image: Reading...
2026-01-10 13:44:57.434166 | orchestrator | openstack_compute_keypair_v2.key: Creating...
2026-01-10 13:44:57.448587 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creating...
2026-01-10 13:44:57.448647 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creating...
2026-01-10 13:44:57.448652 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creating...
2026-01-10 13:44:57.448657 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creating...
2026-01-10 13:44:57.448661 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creating...
2026-01-10 13:44:57.448666 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creating...
2026-01-10 13:44:57.456343 | orchestrator | openstack_networking_network_v2.net_management: Creating...
2026-01-10 13:44:57.882744 | orchestrator | data.openstack_images_image_v2.image_node: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-01-10 13:44:57.889442 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creating...
2026-01-10 13:44:57.889965 | orchestrator | data.openstack_images_image_v2.image: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-01-10 13:44:57.893944 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creating...
2026-01-10 13:44:58.030776 | orchestrator | openstack_compute_keypair_v2.key: Creation complete after 1s [id=testbed]
2026-01-10 13:44:58.037745 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creating...
2026-01-10 13:44:58.445570 | orchestrator | openstack_networking_network_v2.net_management: Creation complete after 1s [id=54af1bfb-6ed0-4178-ac41-fbbe0f30e16e]
2026-01-10 13:44:58.453779 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2026-01-10 13:45:01.097507 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 4s [id=15b7e91d-d5fd-4068-ac98-0857e3d5fdf2]
2026-01-10 13:45:01.103663 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2026-01-10 13:45:01.110081 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 4s [id=573345f2-5167-4bb0-bd40-d392a39279fc]
2026-01-10 13:45:01.115845 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2026-01-10 13:45:01.121759 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 4s [id=f0ceade3-8439-47c2-ab29-dba6a2d0af37]
2026-01-10 13:45:01.124844 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2026-01-10 13:45:01.146317 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 4s [id=b1ccfdf8-8ebc-4581-b39b-71c057731eea]
2026-01-10 13:45:01.155121 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2026-01-10 13:45:01.171393 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 4s [id=7172e707-12eb-4bf8-889d-ca95993faa89]
2026-01-10 13:45:01.174116 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 4s [id=c3c9ac61-c03e-421c-9f43-37b1f8399a84]
2026-01-10 13:45:01.178409 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2026-01-10 13:45:01.183096 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2026-01-10 13:45:01.200471 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 3s [id=c56b24e3-125a-48ee-acc4-7420ce900c20]
2026-01-10 13:45:01.214418 | orchestrator | local_sensitive_file.id_rsa: Creating...
2026-01-10 13:45:01.220308 | orchestrator | local_sensitive_file.id_rsa: Creation complete after 0s [id=92243198da6751eb835e2cbdcd2caea1a05013cb]
2026-01-10 13:45:01.232607 | orchestrator | local_file.id_rsa_pub: Creating...
2026-01-10 13:45:01.239194 | orchestrator | local_file.id_rsa_pub: Creation complete after 0s [id=e273dec348b6342541a7c65b8ad037ba7886e63d]
2026-01-10 13:45:01.240729 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 3s [id=09bcf5bb-a136-4209-bef3-37a648ec73be]
2026-01-10 13:45:01.246195 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creating...
2026-01-10 13:45:01.384059 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 3s [id=44339de3-07bb-4d03-9d3b-2e0777e51af2]
2026-01-10 13:45:01.802828 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 4s [id=77c5dc10-1db3-4eb7-96b3-7516ed6edf54]
2026-01-10 13:45:02.313042 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=d5f4e9ae-f33f-49d2-8c7e-c1456b53e9ed]
2026-01-10 13:45:02.317709 | orchestrator | openstack_networking_router_v2.router: Creating...
2026-01-10 13:45:04.602008 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 4s [id=de778ce0-4f6d-44be-b211-86e0cabeb927]
2026-01-10 13:45:04.609842 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 4s [id=e49f993b-cdad-4d7e-9728-2cad134db285]
2026-01-10 13:45:04.644205 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 4s [id=c01e5313-3aea-4f62-a892-f2183ae77e08]
2026-01-10 13:45:04.648096 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 4s [id=e7f1afe6-f3aa-449f-9835-56bec5ec9c51]
2026-01-10 13:45:04.665465 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 4s [id=1f1914ea-fd0f-4c7c-b7b9-c351b421a456]
2026-01-10 13:45:04.677897 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 4s [id=f73b3cf0-297a-486b-9939-ee112393da40]
2026-01-10 13:45:06.427527 | orchestrator | openstack_networking_router_v2.router: Creation complete after 4s [id=4eed9caf-c1f5-41f7-94eb-659c362ea8b1]
2026-01-10 13:45:06.432632 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creating...
2026-01-10 13:45:06.433452 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creating...
2026-01-10 13:45:06.434781 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creating...
2026-01-10 13:45:06.897480 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creation complete after 1s [id=1acbf866-d48d-4c59-b406-ba3de5d1c0c4]
2026-01-10 13:45:06.907451 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2026-01-10 13:45:06.907542 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2026-01-10 13:45:06.909548 | orchestrator | openstack_networking_port_v2.manager_port_management: Creating...
2026-01-10 13:45:06.910322 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2026-01-10 13:45:06.910464 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2026-01-10 13:45:06.911685 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2026-01-10 13:45:07.050726 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creation complete after 1s [id=9b2fc862-f69f-4a7c-bbce-4a0fda7fc906]
2026-01-10 13:45:07.055486 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2026-01-10 13:45:07.061723 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2026-01-10 13:45:07.066247 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2026-01-10 13:45:07.094444 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 0s [id=0752327d-5af1-44d3-b8c2-15c68a778fb3]
2026-01-10 13:45:07.100661 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2026-01-10 13:45:07.251759 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 0s [id=0ba06126-acd2-418a-bf38-04ac99a78aa0]
2026-01-10 13:45:07.263699 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creating...
2026-01-10 13:45:07.370083 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 0s [id=92ac3bd8-002b-412a-9bad-bc1b6210884b]
2026-01-10 13:45:07.383714 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creating...
2026-01-10 13:45:07.426070 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 0s [id=39e619ce-d94e-464b-9452-255d9ffc7c4f]
2026-01-10 13:45:07.437390 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creating...
2026-01-10 13:45:07.651618 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=b1fdac5e-655e-4351-818d-2f1fe951041c]
2026-01-10 13:45:07.665625 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creating...
2026-01-10 13:45:07.672921 | orchestrator | openstack_networking_port_v2.manager_port_management: Creation complete after 1s [id=aa9dd122-1d06-4920-9886-e30a6381a70b]
2026-01-10 13:45:07.686826 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creating...
2026-01-10 13:45:07.831466 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=78652519-31d2-497d-976c-2e29736c2c7b]
2026-01-10 13:45:07.845306 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creating...
2026-01-10 13:45:07.915972 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 1s [id=34a15884-b97a-41c0-b32a-92fd0f2dcd3a]
2026-01-10 13:45:08.051710 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creation complete after 1s [id=823370be-01de-4e3d-843b-1381d7bc86d3]
2026-01-10 13:45:08.090122 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creation complete after 1s [id=f9183ee1-95b4-45ce-9f4d-ba8a2a522a34]
2026-01-10 13:45:08.227808 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creation complete after 1s [id=f894c187-b1e4-4df3-92de-3337a802535e]
2026-01-10 13:45:08.264092 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 1s [id=6398ef0d-89b7-42e3-a0d4-051ff2669cd6]
2026-01-10 13:45:08.366807 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creation complete after 0s [id=e633335b-8cfe-4e2d-885e-c58e3ca2aa4e]
2026-01-10 13:45:08.379850 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creation complete after 0s [id=5f809922-599e-48be-9dba-dfc332b315c5]
2026-01-10 13:45:08.508620 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 2s [id=3129b4fa-728a-49e1-ae08-d2d92089015b]
2026-01-10 13:45:09.142182 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=b355c9ad-bc69-46ae-b50c-5173bc3f8795]
2026-01-10 13:45:09.665187 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creation complete after 4s [id=cbcfe2b3-edc4-4530-b402-01d2228fc2f3]
2026-01-10 13:45:09.683862 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2026-01-10 13:45:09.709022 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creating...
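The management subnet created earlier in this apply is fully specified in the plan. A sketch of it as HCL; only the `network_id` reference is an assumption (the plan shows it as (known after apply)):

```hcl
resource "openstack_networking_subnet_v2" "subnet_management" {
  name            = "subnet-testbed-management"
  network_id      = openstack_networking_network_v2.net_management.id # assumed reference
  cidr            = "192.168.16.0/20"
  ip_version      = 4
  enable_dhcp     = true
  dns_nameservers = ["8.8.8.8", "9.9.9.9"]

  # DHCP only hands out addresses from the top of the /20, leaving the
  # rest of the range free for statically assigned node/manager ports.
  allocation_pool {
    start = "192.168.31.200"
    end   = "192.168.31.250"
  }
}
```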
2026-01-10 13:45:09.709263 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creating...
2026-01-10 13:45:09.709669 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creating...
2026-01-10 13:45:09.718105 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creating...
2026-01-10 13:45:09.727263 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creating...
2026-01-10 13:45:09.728222 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creating...
2026-01-10 13:45:11.247270 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 1s [id=8d49b372-9bf2-44f7-b9b5-e71a58f35ca5]
2026-01-10 13:45:11.253201 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2026-01-10 13:45:11.261031 | orchestrator | local_file.MANAGER_ADDRESS: Creating...
2026-01-10 13:45:11.263966 | orchestrator | local_file.inventory: Creating...
2026-01-10 13:45:11.264711 | orchestrator | local_file.MANAGER_ADDRESS: Creation complete after 0s [id=9236c7a95a534cb1b3f4d328d7a3a9e10d27793b]
2026-01-10 13:45:11.266363 | orchestrator | local_file.inventory: Creation complete after 0s [id=15ac165fdb8762c23bb3f140c5838859b188dec2]
2026-01-10 13:45:12.671125 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 2s [id=8d49b372-9bf2-44f7-b9b5-e71a58f35ca5]
2026-01-10 13:45:19.710242 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2026-01-10 13:45:19.710382 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2026-01-10 13:45:19.710412 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2026-01-10 13:45:19.728220 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2026-01-10 13:45:19.728370 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2026-01-10 13:45:19.729361 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2026-01-10 13:45:29.710517 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2026-01-10 13:45:29.710638 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2026-01-10 13:45:29.710650 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2026-01-10 13:45:29.729049 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2026-01-10 13:45:29.729119 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2026-01-10 13:45:29.730175 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2026-01-10 13:45:39.719509 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed]
2026-01-10 13:45:39.719612 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed]
2026-01-10 13:45:39.719619 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed]
2026-01-10 13:45:39.729998 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed]
2026-01-10 13:45:39.730141 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed]
2026-01-10 13:45:39.731123 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed]
2026-01-10 13:45:40.332888 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creation complete after 30s [id=39f0ebd8-f5d4-4955-80e5-b25a8a95a307]
2026-01-10 13:45:40.590715 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creation complete after 31s [id=cdad93b8-054d-4d38-95ca-2ec6503b07c0]
2026-01-10 13:45:49.719941 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [40s elapsed]
2026-01-10 13:45:49.730234 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [40s elapsed]
2026-01-10 13:45:49.730366 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [40s elapsed]
2026-01-10 13:45:49.731376 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [40s elapsed]
2026-01-10 13:45:50.665380 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creation complete after 41s [id=9db2a13a-b88a-461e-9fa6-6f3bf833e329]
2026-01-10 13:45:59.720328 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [50s elapsed]
2026-01-10 13:45:59.730583 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [50s elapsed]
2026-01-10 13:45:59.731674 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [50s elapsed]
2026-01-10 13:46:00.482543 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creation complete after 50s [id=d5959e6a-18fc-4f26-bb3d-da5d74de4021]
2026-01-10 13:46:00.598895 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creation complete after 51s [id=283c89ed-f24c-4b5e-b187-50c0acb08cac]
2026-01-10 13:46:09.730980 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [1m0s elapsed]
2026-01-10 13:46:11.096721 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creation complete after 1m1s [id=7d7ee420-239a-494b-8b76-9a82c36455c8]
2026-01-10 13:46:11.122145 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2026-01-10 13:46:11.132172 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2026-01-10 13:46:11.135502 | orchestrator | null_resource.node_semaphore: Creating...
2026-01-10 13:46:11.141542 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2026-01-10 13:46:11.144648 | orchestrator | null_resource.node_semaphore: Creation complete after 0s [id=14368470862239029]
2026-01-10 13:46:11.156817 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2026-01-10 13:46:11.192981 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2026-01-10 13:46:11.193068 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2026-01-10 13:46:11.193076 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2026-01-10 13:46:11.199993 | orchestrator | openstack_compute_instance_v2.manager_server: Creating...
2026-01-10 13:46:11.206190 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2026-01-10 13:46:11.213635 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
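Each attachment ID reported in the log below has the form `<server id>/<volume id>`, which is how `openstack_compute_volume_attach_v2` composes its resource ID. A sketch of the attachment resource; the index arithmetic that maps the nine extra volumes onto the three resource nodes is a guess inferred from the server/volume pairs in the log, not taken from the actual source:

```hcl
# Nine data volumes, three per resource node (node_server[3..5]).
# The "3 + count.index % 3" mapping is an assumption based on the
# server/volume ID pairs visible in the attachment log entries.
resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
  count       = 9
  instance_id = openstack_compute_instance_v2.node_server[3 + count.index % 3].id
  volume_id   = openstack_blockstorage_volume_v3.node_volume[count.index].id
}
```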
2026-01-10 13:46:14.535437 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 4s [id=cdad93b8-054d-4d38-95ca-2ec6503b07c0/c56b24e3-125a-48ee-acc4-7420ce900c20]
2026-01-10 13:46:14.556013 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 4s [id=283c89ed-f24c-4b5e-b187-50c0acb08cac/b1ccfdf8-8ebc-4581-b39b-71c057731eea]
2026-01-10 13:46:14.580054 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 4s [id=7d7ee420-239a-494b-8b76-9a82c36455c8/573345f2-5167-4bb0-bd40-d392a39279fc]
2026-01-10 13:46:14.603430 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 4s [id=cdad93b8-054d-4d38-95ca-2ec6503b07c0/09bcf5bb-a136-4209-bef3-37a648ec73be]
2026-01-10 13:46:14.627027 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 4s [id=7d7ee420-239a-494b-8b76-9a82c36455c8/7172e707-12eb-4bf8-889d-ca95993faa89]
2026-01-10 13:46:14.639365 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 4s [id=283c89ed-f24c-4b5e-b187-50c0acb08cac/c3c9ac61-c03e-421c-9f43-37b1f8399a84]
2026-01-10 13:46:20.740887 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 10s [id=283c89ed-f24c-4b5e-b187-50c0acb08cac/15b7e91d-d5fd-4068-ac98-0857e3d5fdf2]
2026-01-10 13:46:20.741296 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 10s [id=7d7ee420-239a-494b-8b76-9a82c36455c8/f0ceade3-8439-47c2-ab29-dba6a2d0af37]
2026-01-10 13:46:20.769201 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 10s [id=cdad93b8-054d-4d38-95ca-2ec6503b07c0/44339de3-07bb-4d03-9d3b-2e0777e51af2]
2026-01-10 13:46:21.213666 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed]
2026-01-10 13:46:31.214448 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed]
2026-01-10 13:46:31.623691 | orchestrator | openstack_compute_instance_v2.manager_server: Creation complete after 21s [id=560344b7-97e2-4356-a572-a6ab6981f1de]
2026-01-10 13:46:31.643323 | orchestrator |
2026-01-10 13:46:31.643430 | orchestrator | Apply complete! Resources: 64 added, 0 changed, 0 destroyed.
2026-01-10 13:46:31.643442 | orchestrator |
2026-01-10 13:46:31.643454 | orchestrator | Outputs:
2026-01-10 13:46:31.643461 | orchestrator |
2026-01-10 13:46:31.643467 | orchestrator | manager_address = <sensitive>
2026-01-10 13:46:31.643474 | orchestrator | private_key = <sensitive>
2026-01-10 13:46:31.978636 | orchestrator | ok: Runtime: 0:01:40.343408
2026-01-10 13:46:32.014359 |
2026-01-10 13:46:32.014505 | TASK [Fetch manager address]
2026-01-10 13:46:32.551539 | orchestrator | ok
2026-01-10 13:46:32.562161 |
2026-01-10 13:46:32.562306 | TASK [Set manager_host address]
2026-01-10 13:46:32.634762 | orchestrator | ok
2026-01-10 13:46:32.642504 |
2026-01-10 13:46:32.642645 | LOOP [Update ansible collections]
2026-01-10 13:46:33.768957 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-01-10 13:46:33.770008 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-01-10 13:46:33.770118 | orchestrator | Starting galaxy collection install process
2026-01-10 13:46:33.770164 | orchestrator | Process install dependency map
2026-01-10 13:46:33.770684 | orchestrator | Starting collection install process
2026-01-10 13:46:33.770777 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons'
2026-01-10 13:46:33.770823 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons
2026-01-10 13:46:33.770887 | orchestrator | osism.commons:999.0.0 was installed successfully
2026-01-10 13:46:33.771035 | orchestrator | ok: Item: commons Runtime: 0:00:00.715715
2026-01-10 13:46:34.814267 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-01-10 13:46:34.814487 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-01-10 13:46:34.814559 | orchestrator | Starting galaxy collection install process
2026-01-10 13:46:34.814607 | orchestrator | Process install dependency map
2026-01-10 13:46:34.814650 | orchestrator | Starting collection install process
2026-01-10 13:46:34.814690 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed03/.ansible/collections/ansible_collections/osism/services'
2026-01-10 13:46:34.814731 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/services
2026-01-10 13:46:34.814772 | orchestrator | osism.services:999.0.0 was installed successfully
2026-01-10 13:46:34.814864 | orchestrator | ok: Item: services Runtime: 0:00:00.739181
2026-01-10 13:46:34.834900 |
2026-01-10 13:46:34.835096 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2026-01-10 13:46:47.498889 | orchestrator | ok
2026-01-10 13:46:47.509891 |
2026-01-10 13:46:47.510060 | TASK [Wait a little longer for the manager so that everything is ready]
2026-01-10 13:47:47.562196 | orchestrator | ok
2026-01-10 13:47:47.572649 |
2026-01-10 13:47:47.572792 | TASK [Fetch manager ssh hostkey]
2026-01-10 13:47:49.150758 | orchestrator | Output suppressed because no_log was given
2026-01-10 13:47:49.165666 |
2026-01-10 13:47:49.165832 | TASK [Get ssh keypair from terraform environment]
2026-01-10 13:47:49.708091 | orchestrator | ok: Runtime: 0:00:00.006912
2026-01-10 13:47:49.716239 |
2026-01-10 13:47:49.716382 | TASK [Point out that the following task takes some time and does not give any output]
2026-01-10 13:47:49.767845 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
2026-01-10 13:47:49.779158 |
2026-01-10 13:47:49.779322 | TASK [Run manager part 0]
2026-01-10 13:47:50.818751 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-01-10 13:47:50.871566 | orchestrator |
2026-01-10 13:47:50.871631 | orchestrator | PLAY [Wait for cloud-init to finish] *******************************************
2026-01-10 13:47:50.871642 | orchestrator |
2026-01-10 13:47:50.871662 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] *****************************
2026-01-10 13:47:52.963574 | orchestrator | ok: [testbed-manager]
2026-01-10 13:47:52.963625 | orchestrator |
2026-01-10 13:47:52.963646 | orchestrator | PLAY [Run manager part 0] ******************************************************
2026-01-10 13:47:52.963656 | orchestrator |
2026-01-10 13:47:52.963772 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-01-10 13:47:54.993545 | orchestrator | ok: [testbed-manager]
2026-01-10 13:47:54.993617 | orchestrator |
2026-01-10 13:47:54.993627 | orchestrator | TASK [Get home directory of ansible user] **************************************
2026-01-10 13:47:55.725085 | orchestrator | ok: [testbed-manager]
2026-01-10 13:47:55.725138 | orchestrator |
2026-01-10 13:47:55.725145 | orchestrator | TASK [Set repo_path fact] ******************************************************
2026-01-10 13:47:55.762811 | orchestrator | skipping: [testbed-manager]
2026-01-10 13:47:55.762860 | orchestrator |
2026-01-10 13:47:55.762870 | orchestrator | TASK [Update package cache] ****************************************************
2026-01-10 13:47:55.795219 | orchestrator | skipping: [testbed-manager]
2026-01-10 13:47:55.795348 | orchestrator |
2026-01-10 13:47:55.795372 | orchestrator | TASK [Install required packages] ***********************************************
2026-01-10 13:47:55.844951 | orchestrator | skipping: [testbed-manager]
2026-01-10 13:47:55.845211 | orchestrator |
2026-01-10 13:47:55.845289 | orchestrator | TASK [Remove some python packages] *********************************************
2026-01-10 13:47:55.879071 | orchestrator | skipping: [testbed-manager]
2026-01-10 13:47:55.879153 | orchestrator |
2026-01-10 13:47:55.879167 | orchestrator | TASK [Set venv_command fact (RedHat)] ******************************************
2026-01-10 13:47:55.910466 | orchestrator | skipping: [testbed-manager]
2026-01-10 13:47:55.910595 | orchestrator |
2026-01-10 13:47:55.910605 | orchestrator | TASK [Fail if Ubuntu version is lower than 24.04] ******************************
2026-01-10 13:47:55.947579 | orchestrator | skipping: [testbed-manager]
2026-01-10 13:47:55.947636 | orchestrator |
2026-01-10 13:47:55.947646 | orchestrator | TASK [Fail if Debian version is lower than 12] *********************************
2026-01-10 13:47:55.984722 | orchestrator | skipping: [testbed-manager]
2026-01-10 13:47:55.984781 | orchestrator |
2026-01-10 13:47:55.984790 | orchestrator | TASK [Set APT options on manager] **********************************************
2026-01-10 13:47:56.779742 | orchestrator | changed: [testbed-manager]
2026-01-10 13:47:56.780912 | orchestrator |
2026-01-10 13:47:56.780955 | orchestrator | TASK [Update APT cache and run dist-upgrade] ***********************************
2026-01-10 13:50:47.568757 | orchestrator | changed: [testbed-manager]
2026-01-10 13:50:47.568856 | orchestrator |
2026-01-10 13:50:47.568868 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2026-01-10 13:52:07.541036 | orchestrator | changed: [testbed-manager]
2026-01-10 13:52:07.541146 | orchestrator |
2026-01-10 13:52:07.541173 | orchestrator | TASK [Install required packages] ***********************************************
2026-01-10 13:52:29.902810 | orchestrator | changed: [testbed-manager]
2026-01-10 13:52:29.902878 | orchestrator |
2026-01-10 13:52:29.902894 | orchestrator | TASK [Remove some python packages] *********************************************
2026-01-10 13:52:40.149923 | orchestrator | changed: [testbed-manager]
2026-01-10 13:52:40.150054 | orchestrator |
2026-01-10 13:52:40.150076 | orchestrator | TASK [Set venv_command fact (Debian)] ******************************************
2026-01-10 13:52:40.200699 | orchestrator | ok: [testbed-manager]
2026-01-10 13:52:40.200779 | orchestrator |
2026-01-10 13:52:40.200794 | orchestrator | TASK [Get current user] ********************************************************
2026-01-10 13:52:41.047328 | orchestrator | ok: [testbed-manager]
2026-01-10 13:52:41.047443 | orchestrator |
2026-01-10 13:52:41.047463 | orchestrator | TASK [Create venv directory] ***************************************************
2026-01-10 13:52:41.856780 | orchestrator | changed: [testbed-manager]
2026-01-10 13:52:41.856828 | orchestrator |
2026-01-10 13:52:41.856837 | orchestrator | TASK [Install netaddr in venv] *************************************************
2026-01-10 13:52:48.535770 | orchestrator | changed: [testbed-manager]
2026-01-10 13:52:48.535819 | orchestrator |
2026-01-10 13:52:48.536066 | orchestrator | TASK [Install ansible-core in venv] ********************************************
2026-01-10 13:52:55.181377 | orchestrator | changed: [testbed-manager]
2026-01-10 13:52:55.181427 | orchestrator |
2026-01-10 13:52:55.181438 | orchestrator | TASK [Install requests >= 2.32.2] **********************************************
2026-01-10 13:52:57.976015 | orchestrator | changed: [testbed-manager]
2026-01-10 13:52:57.976058 | orchestrator |
2026-01-10 13:52:57.976067 | orchestrator | TASK
[Install docker >= 7.1.0] ************************************************* 2026-01-10 13:52:59.849843 | orchestrator | changed: [testbed-manager] 2026-01-10 13:52:59.850425 | orchestrator | 2026-01-10 13:52:59.850443 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2026-01-10 13:53:01.035251 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-01-10 13:53:01.035337 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-01-10 13:53:01.035351 | orchestrator | 2026-01-10 13:53:01.035363 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2026-01-10 13:53:01.079839 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-01-10 13:53:01.079920 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-01-10 13:53:01.079934 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-01-10 13:53:01.079947 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2026-01-10 13:53:04.425135 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-01-10 13:53:04.425259 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-01-10 13:53:04.425275 | orchestrator | 2026-01-10 13:53:04.425287 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2026-01-10 13:53:05.052990 | orchestrator | changed: [testbed-manager] 2026-01-10 13:53:05.053222 | orchestrator | 2026-01-10 13:53:05.053247 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2026-01-10 13:56:24.701879 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2026-01-10 13:56:24.701953 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2026-01-10 13:56:24.701962 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2026-01-10 13:56:24.701968 | orchestrator | 2026-01-10 13:56:24.701975 | orchestrator | TASK [Install local collections] *********************************************** 2026-01-10 13:56:27.191449 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2026-01-10 13:56:27.191549 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2026-01-10 13:56:27.191566 | orchestrator | 2026-01-10 13:56:27.191580 | orchestrator | PLAY [Create operator user] **************************************************** 2026-01-10 13:56:27.191592 | orchestrator | 2026-01-10 13:56:27.191603 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-10 13:56:28.682632 | orchestrator | ok: [testbed-manager] 2026-01-10 13:56:28.682739 | orchestrator | 2026-01-10 13:56:28.682758 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-01-10 13:56:28.735526 | orchestrator | ok: [testbed-manager] 2026-01-10 13:56:28.735617 | 
orchestrator | 2026-01-10 13:56:28.735632 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-01-10 13:56:28.797014 | orchestrator | ok: [testbed-manager] 2026-01-10 13:56:28.797110 | orchestrator | 2026-01-10 13:56:28.797126 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-01-10 13:56:29.626364 | orchestrator | changed: [testbed-manager] 2026-01-10 13:56:29.626457 | orchestrator | 2026-01-10 13:56:29.626474 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-01-10 13:56:30.495136 | orchestrator | changed: [testbed-manager] 2026-01-10 13:56:30.495267 | orchestrator | 2026-01-10 13:56:30.495286 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-01-10 13:56:32.090542 | orchestrator | changed: [testbed-manager] => (item=adm) 2026-01-10 13:56:32.090645 | orchestrator | changed: [testbed-manager] => (item=sudo) 2026-01-10 13:56:32.090661 | orchestrator | 2026-01-10 13:56:32.090688 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-01-10 13:56:33.534267 | orchestrator | changed: [testbed-manager] 2026-01-10 13:56:33.534393 | orchestrator | 2026-01-10 13:56:33.534410 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-01-10 13:56:35.402922 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2026-01-10 13:56:35.403026 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2026-01-10 13:56:35.403044 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2026-01-10 13:56:35.403057 | orchestrator | 2026-01-10 13:56:35.403070 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-01-10 13:56:35.467060 | orchestrator | skipping: 
[testbed-manager] 2026-01-10 13:56:35.467119 | orchestrator | 2026-01-10 13:56:35.467132 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-01-10 13:56:35.538330 | orchestrator | skipping: [testbed-manager] 2026-01-10 13:56:35.538372 | orchestrator | 2026-01-10 13:56:35.538381 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-01-10 13:56:36.100122 | orchestrator | changed: [testbed-manager] 2026-01-10 13:56:36.100247 | orchestrator | 2026-01-10 13:56:36.100264 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-01-10 13:56:36.173928 | orchestrator | skipping: [testbed-manager] 2026-01-10 13:56:36.174050 | orchestrator | 2026-01-10 13:56:36.174068 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-01-10 13:56:37.078966 | orchestrator | changed: [testbed-manager] => (item=None) 2026-01-10 13:56:37.079069 | orchestrator | changed: [testbed-manager] 2026-01-10 13:56:37.079087 | orchestrator | 2026-01-10 13:56:37.079100 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-01-10 13:56:37.116913 | orchestrator | skipping: [testbed-manager] 2026-01-10 13:56:37.117004 | orchestrator | 2026-01-10 13:56:37.117019 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-01-10 13:56:37.147722 | orchestrator | skipping: [testbed-manager] 2026-01-10 13:56:37.147819 | orchestrator | 2026-01-10 13:56:37.147838 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-01-10 13:56:37.180230 | orchestrator | skipping: [testbed-manager] 2026-01-10 13:56:37.180317 | orchestrator | 2026-01-10 13:56:37.180335 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-01-10 13:56:37.252052 | 
orchestrator | skipping: [testbed-manager] 2026-01-10 13:56:37.252150 | orchestrator | 2026-01-10 13:56:37.252191 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-01-10 13:56:37.970383 | orchestrator | ok: [testbed-manager] 2026-01-10 13:56:37.970498 | orchestrator | 2026-01-10 13:56:37.970515 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-01-10 13:56:37.970529 | orchestrator | 2026-01-10 13:56:37.970540 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-10 13:56:39.409638 | orchestrator | ok: [testbed-manager] 2026-01-10 13:56:39.409732 | orchestrator | 2026-01-10 13:56:39.409748 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2026-01-10 13:56:40.412611 | orchestrator | changed: [testbed-manager] 2026-01-10 13:56:40.412656 | orchestrator | 2026-01-10 13:56:40.412664 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-10 13:56:40.412673 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=14 rescued=0 ignored=0 2026-01-10 13:56:40.412679 | orchestrator | 2026-01-10 13:56:40.671849 | orchestrator | ok: Runtime: 0:08:50.403234 2026-01-10 13:56:40.689148 | 2026-01-10 13:56:40.689308 | TASK [Point out that logging in on the manager is now possible] 2026-01-10 13:56:40.734526 | orchestrator | ok: It is now possible to log in to the manager with 'make login'. 2026-01-10 13:56:40.746459 | 2026-01-10 13:56:40.746691 | TASK [Point out that the following task takes some time and does not give any output] 2026-01-10 13:56:40.796285 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 
2026-01-10 13:56:40.806064 | 2026-01-10 13:56:40.806216 | TASK [Run manager part 1 + 2] 2026-01-10 13:56:41.693367 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-01-10 13:56:41.754397 | orchestrator | 2026-01-10 13:56:41.754485 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2026-01-10 13:56:41.754502 | orchestrator | 2026-01-10 13:56:41.754532 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-10 13:56:44.781443 | orchestrator | ok: [testbed-manager] 2026-01-10 13:56:44.781624 | orchestrator | 2026-01-10 13:56:44.781703 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-01-10 13:56:44.831262 | orchestrator | skipping: [testbed-manager] 2026-01-10 13:56:44.831348 | orchestrator | 2026-01-10 13:56:44.831368 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-01-10 13:56:44.881656 | orchestrator | ok: [testbed-manager] 2026-01-10 13:56:44.881718 | orchestrator | 2026-01-10 13:56:44.881728 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-01-10 13:56:44.926697 | orchestrator | ok: [testbed-manager] 2026-01-10 13:56:44.926790 | orchestrator | 2026-01-10 13:56:44.926809 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-01-10 13:56:45.011358 | orchestrator | ok: [testbed-manager] 2026-01-10 13:56:45.011417 | orchestrator | 2026-01-10 13:56:45.011425 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-01-10 13:56:45.068067 | orchestrator | ok: [testbed-manager] 2026-01-10 13:56:45.068153 | orchestrator | 2026-01-10 13:56:45.068192 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-01-10 13:56:45.126868 | 
orchestrator | included: /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2026-01-10 13:56:45.126956 | orchestrator | 2026-01-10 13:56:45.126972 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-01-10 13:56:45.856612 | orchestrator | ok: [testbed-manager] 2026-01-10 13:56:45.856692 | orchestrator | 2026-01-10 13:56:45.856706 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-01-10 13:56:45.907685 | orchestrator | skipping: [testbed-manager] 2026-01-10 13:56:45.907787 | orchestrator | 2026-01-10 13:56:45.907810 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-01-10 13:56:47.361997 | orchestrator | changed: [testbed-manager] 2026-01-10 13:56:47.362133 | orchestrator | 2026-01-10 13:56:47.362154 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-01-10 13:56:47.965112 | orchestrator | ok: [testbed-manager] 2026-01-10 13:56:47.966608 | orchestrator | 2026-01-10 13:56:47.966626 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-01-10 13:56:49.173652 | orchestrator | changed: [testbed-manager] 2026-01-10 13:56:49.173763 | orchestrator | 2026-01-10 13:56:49.173785 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-01-10 13:57:05.047544 | orchestrator | changed: [testbed-manager] 2026-01-10 13:57:05.049028 | orchestrator | 2026-01-10 13:57:05.049060 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-01-10 13:57:05.791969 | orchestrator | ok: [testbed-manager] 2026-01-10 13:57:05.792083 | orchestrator | 2026-01-10 13:57:05.792112 | orchestrator | TASK [Set repo_path fact] ****************************************************** 
2026-01-10 13:57:05.848627 | orchestrator | skipping: [testbed-manager] 2026-01-10 13:57:05.848718 | orchestrator | 2026-01-10 13:57:05.848733 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2026-01-10 13:57:06.859280 | orchestrator | changed: [testbed-manager] 2026-01-10 13:57:06.859348 | orchestrator | 2026-01-10 13:57:06.859366 | orchestrator | TASK [Copy SSH private key] **************************************************** 2026-01-10 13:57:07.857087 | orchestrator | changed: [testbed-manager] 2026-01-10 13:57:07.857173 | orchestrator | 2026-01-10 13:57:07.857219 | orchestrator | TASK [Create configuration directory] ****************************************** 2026-01-10 13:57:08.446692 | orchestrator | changed: [testbed-manager] 2026-01-10 13:57:08.446733 | orchestrator | 2026-01-10 13:57:08.446740 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2026-01-10 13:57:08.500592 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-01-10 13:57:08.500720 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-01-10 13:57:08.500739 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-01-10 13:57:08.500751 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2026-01-10 13:57:10.634314 | orchestrator | changed: [testbed-manager] 2026-01-10 13:57:10.634446 | orchestrator | 2026-01-10 13:57:10.634466 | orchestrator | TASK [Install python requirements in venv] ************************************* 2026-01-10 13:57:19.753562 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2026-01-10 13:57:19.753622 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2026-01-10 13:57:19.753632 | orchestrator | ok: [testbed-manager] => (item=packaging) 2026-01-10 13:57:19.753639 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2026-01-10 13:57:19.753649 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2026-01-10 13:57:19.753655 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2026-01-10 13:57:19.753661 | orchestrator | 2026-01-10 13:57:19.753667 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2026-01-10 13:57:20.886311 | orchestrator | changed: [testbed-manager] 2026-01-10 13:57:20.886415 | orchestrator | 2026-01-10 13:57:20.886432 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2026-01-10 13:57:20.928158 | orchestrator | skipping: [testbed-manager] 2026-01-10 13:57:20.928273 | orchestrator | 2026-01-10 13:57:20.928283 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2026-01-10 13:57:23.867390 | orchestrator | changed: [testbed-manager] 2026-01-10 13:57:23.867448 | orchestrator | 2026-01-10 13:57:23.867456 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2026-01-10 13:57:23.915375 | orchestrator | skipping: [testbed-manager] 2026-01-10 13:57:23.915444 | orchestrator | 2026-01-10 13:57:23.915454 | orchestrator | TASK [Run manager part 2] ****************************************************** 2026-01-10 13:59:06.345079 | orchestrator | changed: [testbed-manager] 2026-01-10 
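The custom CA handling above is OS-dependent: on Debian/Ubuntu the certificate goes into /usr/local/share/ca-certificates and `update-ca-certificates` is run, while the skipped RedHat branch would use /etc/pki/ca-trust/source/anchors plus `update-ca-trust`. A hedged sketch of the Debian/Ubuntu pair — the source file name is an assumption:

```yaml
- name: Copy testbed custom CA certificate on Debian/Ubuntu
  ansible.builtin.copy:
    src: testbed-ca.crt                                    # assumption: local cert file
    dest: /usr/local/share/ca-certificates/testbed-ca.crt  # must end in .crt to be picked up
    mode: "0644"
  when: ansible_os_family == "Debian"

- name: Run update-ca-certificates on Debian/Ubuntu
  ansible.builtin.command: update-ca-certificates
  when: ansible_os_family == "Debian"
```

The `when` conditions mirror why the CentOS/RedHat tasks report `skipping` on this Ubuntu manager.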
13:59:06.345131 | orchestrator | 2026-01-10 13:59:06.345139 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-01-10 13:59:07.548125 | orchestrator | ok: [testbed-manager] 2026-01-10 13:59:07.548232 | orchestrator | 2026-01-10 13:59:07.548249 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-10 13:59:07.548265 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2026-01-10 13:59:07.548277 | orchestrator | 2026-01-10 13:59:07.950906 | orchestrator | ok: Runtime: 0:02:26.542198 2026-01-10 13:59:07.969340 | 2026-01-10 13:59:07.969501 | TASK [Reboot manager] 2026-01-10 13:59:09.508414 | orchestrator | ok: Runtime: 0:00:00.971894 2026-01-10 13:59:09.526058 | 2026-01-10 13:59:09.526245 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-01-10 13:59:25.962183 | orchestrator | ok 2026-01-10 13:59:25.972651 | 2026-01-10 13:59:25.972782 | TASK [Wait a little longer for the manager so that everything is ready] 2026-01-10 14:00:26.027651 | orchestrator | ok 2026-01-10 14:00:26.040227 | 2026-01-10 14:00:26.040395 | TASK [Deploy manager + bootstrap nodes] 2026-01-10 14:00:28.871096 | orchestrator | 2026-01-10 14:00:28.871333 | orchestrator | # DEPLOY MANAGER 2026-01-10 14:00:28.871358 | orchestrator | 2026-01-10 14:00:28.871411 | orchestrator | + set -e 2026-01-10 14:00:28.871425 | orchestrator | + echo 2026-01-10 14:00:28.871440 | orchestrator | + echo '# DEPLOY MANAGER' 2026-01-10 14:00:28.871458 | orchestrator | + echo 2026-01-10 14:00:28.871512 | orchestrator | + cat /opt/manager-vars.sh 2026-01-10 14:00:28.874953 | orchestrator | export NUMBER_OF_NODES=6 2026-01-10 14:00:28.874981 | orchestrator | 2026-01-10 14:00:28.874994 | orchestrator | export CEPH_VERSION=reef 2026-01-10 14:00:28.875007 | orchestrator | export CONFIGURATION_VERSION=main 2026-01-10 14:00:28.875020 | orchestrator 
| export MANAGER_VERSION=9.5.0 2026-01-10 14:00:28.875042 | orchestrator | export OPENSTACK_VERSION=2024.2 2026-01-10 14:00:28.875053 | orchestrator | 2026-01-10 14:00:28.875071 | orchestrator | export ARA=false 2026-01-10 14:00:28.875083 | orchestrator | export DEPLOY_MODE=manager 2026-01-10 14:00:28.875101 | orchestrator | export TEMPEST=false 2026-01-10 14:00:28.875113 | orchestrator | export IS_ZUUL=true 2026-01-10 14:00:28.875124 | orchestrator | 2026-01-10 14:00:28.875142 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.106 2026-01-10 14:00:28.875154 | orchestrator | export EXTERNAL_API=false 2026-01-10 14:00:28.875165 | orchestrator | 2026-01-10 14:00:28.875176 | orchestrator | export IMAGE_USER=ubuntu 2026-01-10 14:00:28.875191 | orchestrator | export IMAGE_NODE_USER=ubuntu 2026-01-10 14:00:28.875202 | orchestrator | 2026-01-10 14:00:28.875213 | orchestrator | export CEPH_STACK=ceph-ansible 2026-01-10 14:00:28.875229 | orchestrator | 2026-01-10 14:00:28.875241 | orchestrator | + echo 2026-01-10 14:00:28.875254 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-01-10 14:00:28.876315 | orchestrator | ++ export INTERACTIVE=false 2026-01-10 14:00:28.876332 | orchestrator | ++ INTERACTIVE=false 2026-01-10 14:00:28.876347 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-01-10 14:00:28.876358 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-01-10 14:00:28.876517 | orchestrator | + source /opt/manager-vars.sh 2026-01-10 14:00:28.876534 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-01-10 14:00:28.876545 | orchestrator | ++ NUMBER_OF_NODES=6 2026-01-10 14:00:28.876556 | orchestrator | ++ export CEPH_VERSION=reef 2026-01-10 14:00:28.876567 | orchestrator | ++ CEPH_VERSION=reef 2026-01-10 14:00:28.876825 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-01-10 14:00:28.876841 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-01-10 14:00:28.876852 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-01-10 14:00:28.876863 | 
orchestrator | ++ MANAGER_VERSION=9.5.0 2026-01-10 14:00:28.876874 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-01-10 14:00:28.876895 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-01-10 14:00:28.876906 | orchestrator | ++ export ARA=false 2026-01-10 14:00:28.876917 | orchestrator | ++ ARA=false 2026-01-10 14:00:28.876928 | orchestrator | ++ export DEPLOY_MODE=manager 2026-01-10 14:00:28.876939 | orchestrator | ++ DEPLOY_MODE=manager 2026-01-10 14:00:28.876950 | orchestrator | ++ export TEMPEST=false 2026-01-10 14:00:28.876961 | orchestrator | ++ TEMPEST=false 2026-01-10 14:00:28.876971 | orchestrator | ++ export IS_ZUUL=true 2026-01-10 14:00:28.876982 | orchestrator | ++ IS_ZUUL=true 2026-01-10 14:00:28.876993 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.106 2026-01-10 14:00:28.877004 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.106 2026-01-10 14:00:28.877015 | orchestrator | ++ export EXTERNAL_API=false 2026-01-10 14:00:28.877026 | orchestrator | ++ EXTERNAL_API=false 2026-01-10 14:00:28.877037 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-01-10 14:00:28.877048 | orchestrator | ++ IMAGE_USER=ubuntu 2026-01-10 14:00:28.877058 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-01-10 14:00:28.877069 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-01-10 14:00:28.877081 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-01-10 14:00:28.877091 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-01-10 14:00:28.877103 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2026-01-10 14:00:28.938094 | orchestrator | + docker version 2026-01-10 14:00:29.234168 | orchestrator | Client: Docker Engine - Community 2026-01-10 14:00:29.234305 | orchestrator | Version: 27.5.1 2026-01-10 14:00:29.234322 | orchestrator | API version: 1.47 2026-01-10 14:00:29.234334 | orchestrator | Go version: go1.22.11 2026-01-10 14:00:29.234346 | orchestrator | Git commit: 9f9e405 2026-01-10 14:00:29.234357 
| orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-01-10 14:00:29.234406 | orchestrator | OS/Arch: linux/amd64 2026-01-10 14:00:29.234417 | orchestrator | Context: default 2026-01-10 14:00:29.234428 | orchestrator | 2026-01-10 14:00:29.234439 | orchestrator | Server: Docker Engine - Community 2026-01-10 14:00:29.234450 | orchestrator | Engine: 2026-01-10 14:00:29.234462 | orchestrator | Version: 27.5.1 2026-01-10 14:00:29.234474 | orchestrator | API version: 1.47 (minimum version 1.24) 2026-01-10 14:00:29.234525 | orchestrator | Go version: go1.22.11 2026-01-10 14:00:29.234537 | orchestrator | Git commit: 4c9b3b0 2026-01-10 14:00:29.234547 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-01-10 14:00:29.234558 | orchestrator | OS/Arch: linux/amd64 2026-01-10 14:00:29.234569 | orchestrator | Experimental: false 2026-01-10 14:00:29.234580 | orchestrator | containerd: 2026-01-10 14:00:29.234591 | orchestrator | Version: v2.2.1 2026-01-10 14:00:29.234602 | orchestrator | GitCommit: dea7da592f5d1d2b7755e3a161be07f43fad8f75 2026-01-10 14:00:29.234614 | orchestrator | runc: 2026-01-10 14:00:29.234625 | orchestrator | Version: 1.3.4 2026-01-10 14:00:29.234636 | orchestrator | GitCommit: v1.3.4-0-gd6d73eb8 2026-01-10 14:00:29.234647 | orchestrator | docker-init: 2026-01-10 14:00:29.234657 | orchestrator | Version: 0.19.0 2026-01-10 14:00:29.234669 | orchestrator | GitCommit: de40ad0 2026-01-10 14:00:29.238413 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2026-01-10 14:00:29.248666 | orchestrator | + set -e 2026-01-10 14:00:29.248704 | orchestrator | + source /opt/manager-vars.sh 2026-01-10 14:00:29.248719 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-01-10 14:00:29.248731 | orchestrator | ++ NUMBER_OF_NODES=6 2026-01-10 14:00:29.248742 | orchestrator | ++ export CEPH_VERSION=reef 2026-01-10 14:00:29.248753 | orchestrator | ++ CEPH_VERSION=reef 2026-01-10 14:00:29.248765 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-01-10 
14:00:29.248776 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-01-10 14:00:29.248787 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-01-10 14:00:29.248798 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-01-10 14:00:29.248809 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-01-10 14:00:29.248820 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-01-10 14:00:29.248838 | orchestrator | ++ export ARA=false 2026-01-10 14:00:29.248850 | orchestrator | ++ ARA=false 2026-01-10 14:00:29.248861 | orchestrator | ++ export DEPLOY_MODE=manager 2026-01-10 14:00:29.248872 | orchestrator | ++ DEPLOY_MODE=manager 2026-01-10 14:00:29.248882 | orchestrator | ++ export TEMPEST=false 2026-01-10 14:00:29.248893 | orchestrator | ++ TEMPEST=false 2026-01-10 14:00:29.248904 | orchestrator | ++ export IS_ZUUL=true 2026-01-10 14:00:29.248915 | orchestrator | ++ IS_ZUUL=true 2026-01-10 14:00:29.248926 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.106 2026-01-10 14:00:29.248937 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.106 2026-01-10 14:00:29.248947 | orchestrator | ++ export EXTERNAL_API=false 2026-01-10 14:00:29.248958 | orchestrator | ++ EXTERNAL_API=false 2026-01-10 14:00:29.248968 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-01-10 14:00:29.248978 | orchestrator | ++ IMAGE_USER=ubuntu 2026-01-10 14:00:29.248989 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-01-10 14:00:29.249000 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-01-10 14:00:29.249011 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-01-10 14:00:29.249021 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-01-10 14:00:29.249032 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-01-10 14:00:29.249042 | orchestrator | ++ export INTERACTIVE=false 2026-01-10 14:00:29.249053 | orchestrator | ++ INTERACTIVE=false 2026-01-10 14:00:29.249063 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-01-10 14:00:29.249081 | orchestrator | ++ 
OSISM_APPLY_RETRY=1 2026-01-10 14:00:29.249096 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]] 2026-01-10 14:00:29.249107 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 9.5.0 2026-01-10 14:00:29.257025 | orchestrator | + set -e 2026-01-10 14:00:29.257044 | orchestrator | + VERSION=9.5.0 2026-01-10 14:00:29.257059 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 9.5.0/g' /opt/configuration/environments/manager/configuration.yml 2026-01-10 14:00:29.264252 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]] 2026-01-10 14:00:29.264287 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml 2026-01-10 14:00:29.268849 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml 2026-01-10 14:00:29.272400 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh 2026-01-10 14:00:29.281175 | orchestrator | + set -e 2026-01-10 14:00:29.281273 | orchestrator | /opt/configuration ~ 2026-01-10 14:00:29.281292 | orchestrator | + pushd /opt/configuration 2026-01-10 14:00:29.281308 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-01-10 14:00:29.282849 | orchestrator | + source /opt/venv/bin/activate 2026-01-10 14:00:29.284829 | orchestrator | ++ deactivate nondestructive 2026-01-10 14:00:29.284850 | orchestrator | ++ '[' -n '' ']' 2026-01-10 14:00:29.284866 | orchestrator | ++ '[' -n '' ']' 2026-01-10 14:00:29.284906 | orchestrator | ++ hash -r 2026-01-10 14:00:29.284917 | orchestrator | ++ '[' -n '' ']' 2026-01-10 14:00:29.284928 | orchestrator | ++ unset VIRTUAL_ENV 2026-01-10 14:00:29.284939 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-01-10 14:00:29.284950 | orchestrator | ++ '[' '!' 
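The set-manager-version.sh trace above pins `manager_version` and, because a concrete release (not `latest`) was requested, deletes the `ceph_version`/`openstack_version` lines so that the release defaults apply. The logic can be re-created like this — the temp file stands in for /opt/configuration/environments/manager/configuration.yml:

```shell
set -e
VERSION=9.5.0
CONF=$(mktemp)
# stand-in for environments/manager/configuration.yml
cat > "$CONF" <<'EOF'
manager_version: latest
ceph_version: reef
openstack_version: 2024.2
EOF
# pin the requested manager version in place
sed -i "s/manager_version: .*/manager_version: ${VERSION}/g" "$CONF"
# a concrete version implies release defaults for ceph/openstack,
# so drop the explicit pins
if [ "$VERSION" != "latest" ]; then
    sed -i '/ceph_version:/d' "$CONF"
    sed -i '/openstack_version:/d' "$CONF"
fi
cat "$CONF"   # prints only: manager_version: 9.5.0
rm -f "$CONF"
```

With `VERSION=latest` the two delete commands would be skipped and the explicit Ceph/OpenStack pins would survive.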
nondestructive = nondestructive ']' 2026-01-10 14:00:29.284961 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-01-10 14:00:29.284972 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-01-10 14:00:29.284983 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-01-10 14:00:29.284994 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-01-10 14:00:29.285006 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-01-10 14:00:29.285018 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-01-10 14:00:29.285029 | orchestrator | ++ export PATH 2026-01-10 14:00:29.285040 | orchestrator | ++ '[' -n '' ']' 2026-01-10 14:00:29.285051 | orchestrator | ++ '[' -z '' ']' 2026-01-10 14:00:29.285062 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-01-10 14:00:29.285072 | orchestrator | ++ PS1='(venv) ' 2026-01-10 14:00:29.285083 | orchestrator | ++ export PS1 2026-01-10 14:00:29.285094 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2026-01-10 14:00:29.285104 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-01-10 14:00:29.285115 | orchestrator | ++ hash -r 2026-01-10 14:00:29.285126 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging 2026-01-10 14:00:30.569693 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3) 2026-01-10 14:00:30.570658 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.5) 2026-01-10 14:00:30.572231 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6) 2026-01-10 14:00:30.573560 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.3) 2026-01-10 14:00:30.574753 | orchestrator | Requirement already satisfied: packaging in 
/opt/venv/lib/python3.12/site-packages (25.0) 2026-01-10 14:00:30.584998 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.3.1) 2026-01-10 14:00:30.586256 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6) 2026-01-10 14:00:30.587216 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20) 2026-01-10 14:00:30.588442 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2) 2026-01-10 14:00:30.623170 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.4) 2026-01-10 14:00:30.624525 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.11) 2026-01-10 14:00:30.626405 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.6.3) 2026-01-10 14:00:30.627489 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2026.1.4) 2026-01-10 14:00:30.631207 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.3) 2026-01-10 14:00:30.839831 | orchestrator | ++ which gilt 2026-01-10 14:00:30.847812 | orchestrator | + GILT=/opt/venv/bin/gilt 2026-01-10 14:00:30.847862 | orchestrator | + /opt/venv/bin/gilt overlay 2026-01-10 14:00:31.104430 | orchestrator | osism.cfg-generics: 2026-01-10 14:00:31.277465 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/ 2026-01-10 14:00:31.277590 | orchestrator | - copied 
(v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/ 2026-01-10 14:00:31.277606 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/ 2026-01-10 14:00:31.277621 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/ 2026-01-10 14:00:31.929813 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/ 2026-01-10 14:00:31.939395 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/ 2026-01-10 14:00:32.279589 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/ 2026-01-10 14:00:32.327266 | orchestrator | ~ 2026-01-10 14:00:32.327401 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-01-10 14:00:32.327419 | orchestrator | + deactivate 2026-01-10 14:00:32.327433 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-01-10 14:00:32.327446 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-01-10 14:00:32.327458 | orchestrator | + export PATH 2026-01-10 14:00:32.327469 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-01-10 14:00:32.327481 | orchestrator | + '[' -n '' ']' 2026-01-10 14:00:32.327496 | orchestrator | + hash -r 2026-01-10 14:00:32.327507 | orchestrator | + '[' -n '' ']' 2026-01-10 14:00:32.327518 | orchestrator | + unset VIRTUAL_ENV 2026-01-10 14:00:32.327529 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-01-10 14:00:32.327540 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2026-01-10 14:00:32.327552 | orchestrator | + unset -f deactivate 2026-01-10 14:00:32.327563 | orchestrator | + popd 2026-01-10 14:00:32.327796 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]] 2026-01-10 14:00:32.327813 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2026-01-10 14:00:32.329319 | orchestrator | ++ semver 9.5.0 7.0.0 2026-01-10 14:00:32.395155 | orchestrator | + [[ 1 -ge 0 ]] 2026-01-10 14:00:32.395271 | orchestrator | + echo 'enable_osism_kubernetes: true' 2026-01-10 14:00:32.396008 | orchestrator | ++ semver 9.5.0 10.0.0-0 2026-01-10 14:00:32.456674 | orchestrator | + [[ -1 -ge 0 ]] 2026-01-10 14:00:32.457358 | orchestrator | ++ semver 2024.2 2025.1 2026-01-10 14:00:32.522484 | orchestrator | + [[ -1 -ge 0 ]] 2026-01-10 14:00:32.522606 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2026-01-10 14:00:32.625589 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-01-10 14:00:32.625737 | orchestrator | + source /opt/venv/bin/activate 2026-01-10 14:00:32.625762 | orchestrator | ++ deactivate nondestructive 2026-01-10 14:00:32.625784 | orchestrator | ++ '[' -n '' ']' 2026-01-10 14:00:32.625804 | orchestrator | ++ '[' -n '' ']' 2026-01-10 14:00:32.625841 | orchestrator | ++ hash -r 2026-01-10 14:00:32.625861 | orchestrator | ++ '[' -n '' ']' 2026-01-10 14:00:32.625880 | orchestrator | ++ unset VIRTUAL_ENV 2026-01-10 14:00:32.625899 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-01-10 14:00:32.625919 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2026-01-10 14:00:32.625938 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-01-10 14:00:32.625958 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-01-10 14:00:32.625977 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-01-10 14:00:32.625996 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-01-10 14:00:32.626064 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-01-10 14:00:32.626116 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-01-10 14:00:32.626138 | orchestrator | ++ export PATH 2026-01-10 14:00:32.626215 | orchestrator | ++ '[' -n '' ']' 2026-01-10 14:00:32.626237 | orchestrator | ++ '[' -z '' ']' 2026-01-10 14:00:32.626257 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-01-10 14:00:32.626270 | orchestrator | ++ PS1='(venv) ' 2026-01-10 14:00:32.626281 | orchestrator | ++ export PS1 2026-01-10 14:00:32.626292 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2026-01-10 14:00:32.626303 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-01-10 14:00:32.626315 | orchestrator | ++ hash -r 2026-01-10 14:00:32.626331 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2026-01-10 14:00:33.895610 | orchestrator | 2026-01-10 14:00:33.895729 | orchestrator | PLAY [Copy custom facts] ******************************************************* 2026-01-10 14:00:33.895748 | orchestrator | 2026-01-10 14:00:33.895761 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-01-10 14:00:34.496028 | orchestrator | ok: [testbed-manager] 2026-01-10 14:00:34.496134 | orchestrator | 2026-01-10 14:00:34.496151 | orchestrator | TASK [Copy fact files] ********************************************************* 
2026-01-10 14:00:35.517581 | orchestrator | changed: [testbed-manager] 2026-01-10 14:00:35.517693 | orchestrator | 2026-01-10 14:00:35.517710 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2026-01-10 14:00:35.517749 | orchestrator | 2026-01-10 14:00:35.517762 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-10 14:00:37.845880 | orchestrator | ok: [testbed-manager] 2026-01-10 14:00:37.845991 | orchestrator | 2026-01-10 14:00:37.846010 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2026-01-10 14:00:37.901905 | orchestrator | ok: [testbed-manager] 2026-01-10 14:00:37.901994 | orchestrator | 2026-01-10 14:00:37.902010 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2026-01-10 14:00:38.367781 | orchestrator | changed: [testbed-manager] 2026-01-10 14:00:38.367890 | orchestrator | 2026-01-10 14:00:38.367910 | orchestrator | TASK [Add netbox_enable parameter] ********************************************* 2026-01-10 14:00:38.410266 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:00:38.410360 | orchestrator | 2026-01-10 14:00:38.410398 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2026-01-10 14:00:38.779156 | orchestrator | changed: [testbed-manager] 2026-01-10 14:00:38.779245 | orchestrator | 2026-01-10 14:00:38.779257 | orchestrator | TASK [Use insecure glance configuration] *************************************** 2026-01-10 14:00:38.842810 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:00:38.842936 | orchestrator | 2026-01-10 14:00:38.842963 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2026-01-10 14:00:39.179598 | orchestrator | ok: [testbed-manager] 2026-01-10 14:00:39.179711 | orchestrator | 2026-01-10 14:00:39.179738 | orchestrator | TASK 
[Add nova_compute_virt_type parameter] ************************************ 2026-01-10 14:00:39.327343 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:00:39.327499 | orchestrator | 2026-01-10 14:00:39.327517 | orchestrator | PLAY [Apply role traefik] ****************************************************** 2026-01-10 14:00:39.327531 | orchestrator | 2026-01-10 14:00:39.327542 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-10 14:00:41.127099 | orchestrator | ok: [testbed-manager] 2026-01-10 14:00:41.127199 | orchestrator | 2026-01-10 14:00:41.127214 | orchestrator | TASK [Apply traefik role] ****************************************************** 2026-01-10 14:00:41.236193 | orchestrator | included: osism.services.traefik for testbed-manager 2026-01-10 14:00:41.236295 | orchestrator | 2026-01-10 14:00:41.236311 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2026-01-10 14:00:41.292890 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2026-01-10 14:00:41.293036 | orchestrator | 2026-01-10 14:00:41.293063 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2026-01-10 14:00:42.480317 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2026-01-10 14:00:42.480481 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 2026-01-10 14:00:42.480502 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2026-01-10 14:00:42.480514 | orchestrator | 2026-01-10 14:00:42.480527 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2026-01-10 14:00:44.368730 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2026-01-10 14:00:44.368848 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 
2026-01-10 14:00:44.368864 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2026-01-10 14:00:44.368877 | orchestrator | 2026-01-10 14:00:44.368889 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ******************** 2026-01-10 14:00:45.037722 | orchestrator | changed: [testbed-manager] => (item=None) 2026-01-10 14:00:45.037828 | orchestrator | changed: [testbed-manager] 2026-01-10 14:00:45.037845 | orchestrator | 2026-01-10 14:00:45.037858 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2026-01-10 14:00:45.690632 | orchestrator | changed: [testbed-manager] => (item=None) 2026-01-10 14:00:45.690737 | orchestrator | changed: [testbed-manager] 2026-01-10 14:00:45.690757 | orchestrator | 2026-01-10 14:00:45.690770 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2026-01-10 14:00:45.744705 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:00:45.744787 | orchestrator | 2026-01-10 14:00:45.744800 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2026-01-10 14:00:46.124096 | orchestrator | ok: [testbed-manager] 2026-01-10 14:00:46.124169 | orchestrator | 2026-01-10 14:00:46.124176 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2026-01-10 14:00:46.215418 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2026-01-10 14:00:46.215513 | orchestrator | 2026-01-10 14:00:46.215526 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2026-01-10 14:00:47.335180 | orchestrator | changed: [testbed-manager] 2026-01-10 14:00:47.335288 | orchestrator | 2026-01-10 14:00:47.335305 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2026-01-10 
14:00:48.212652 | orchestrator | changed: [testbed-manager] 2026-01-10 14:00:48.212755 | orchestrator | 2026-01-10 14:00:48.212770 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2026-01-10 14:00:58.838857 | orchestrator | changed: [testbed-manager] 2026-01-10 14:00:58.838968 | orchestrator | 2026-01-10 14:00:58.839004 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2026-01-10 14:00:58.903233 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:00:58.903341 | orchestrator | 2026-01-10 14:00:58.903359 | orchestrator | PLAY [Deploy manager service] ************************************************** 2026-01-10 14:00:58.903373 | orchestrator | 2026-01-10 14:00:58.903384 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-10 14:01:00.707880 | orchestrator | ok: [testbed-manager] 2026-01-10 14:01:00.707989 | orchestrator | 2026-01-10 14:01:00.708007 | orchestrator | TASK [Apply manager role] ****************************************************** 2026-01-10 14:01:00.831827 | orchestrator | included: osism.services.manager for testbed-manager 2026-01-10 14:01:00.831923 | orchestrator | 2026-01-10 14:01:00.831939 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2026-01-10 14:01:00.906842 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2026-01-10 14:01:00.906953 | orchestrator | 2026-01-10 14:01:00.906974 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2026-01-10 14:01:03.728801 | orchestrator | ok: [testbed-manager] 2026-01-10 14:01:03.728939 | orchestrator | 2026-01-10 14:01:03.728958 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2026-01-10 14:01:03.784819 | 
orchestrator | ok: [testbed-manager] 2026-01-10 14:01:03.784936 | orchestrator | 2026-01-10 14:01:03.784950 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2026-01-10 14:01:03.918772 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2026-01-10 14:01:03.918897 | orchestrator | 2026-01-10 14:01:03.918913 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2026-01-10 14:01:06.853333 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2026-01-10 14:01:06.853510 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2026-01-10 14:01:06.853525 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2026-01-10 14:01:06.853538 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2026-01-10 14:01:06.853548 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2026-01-10 14:01:06.853559 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2026-01-10 14:01:06.853569 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2026-01-10 14:01:06.853579 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2026-01-10 14:01:06.853589 | orchestrator | 2026-01-10 14:01:06.853604 | orchestrator | TASK [osism.services.manager : Copy all environment file] ********************** 2026-01-10 14:01:07.478262 | orchestrator | changed: [testbed-manager] 2026-01-10 14:01:07.478366 | orchestrator | 2026-01-10 14:01:07.478376 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2026-01-10 14:01:08.143954 | orchestrator | changed: [testbed-manager] 2026-01-10 14:01:08.144074 | orchestrator | 2026-01-10 14:01:08.144091 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2026-01-10 
14:01:08.215031 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2026-01-10 14:01:08.215182 | orchestrator | 2026-01-10 14:01:08.215197 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] ********************* 2026-01-10 14:01:09.501238 | orchestrator | changed: [testbed-manager] => (item=ara) 2026-01-10 14:01:09.501372 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2026-01-10 14:01:09.501388 | orchestrator | 2026-01-10 14:01:09.501458 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2026-01-10 14:01:10.150289 | orchestrator | changed: [testbed-manager] 2026-01-10 14:01:10.150452 | orchestrator | 2026-01-10 14:01:10.150470 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2026-01-10 14:01:10.202224 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:01:10.202336 | orchestrator | 2026-01-10 14:01:10.202351 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ****************** 2026-01-10 14:01:10.275788 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager 2026-01-10 14:01:10.275918 | orchestrator | 2026-01-10 14:01:10.275940 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] ***************** 2026-01-10 14:01:10.904471 | orchestrator | changed: [testbed-manager] 2026-01-10 14:01:10.904600 | orchestrator | 2026-01-10 14:01:10.904617 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2026-01-10 14:01:10.993493 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2026-01-10 14:01:10.993621 | orchestrator | 2026-01-10 14:01:10.993638 
| orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2026-01-10 14:01:12.405904 | orchestrator | changed: [testbed-manager] => (item=None) 2026-01-10 14:01:12.406082 | orchestrator | changed: [testbed-manager] => (item=None) 2026-01-10 14:01:12.406098 | orchestrator | changed: [testbed-manager] 2026-01-10 14:01:12.406111 | orchestrator | 2026-01-10 14:01:12.406123 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2026-01-10 14:01:13.051227 | orchestrator | changed: [testbed-manager] 2026-01-10 14:01:13.051361 | orchestrator | 2026-01-10 14:01:13.051379 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2026-01-10 14:01:13.101506 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:01:13.101620 | orchestrator | 2026-01-10 14:01:13.101664 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2026-01-10 14:01:13.216710 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2026-01-10 14:01:13.216827 | orchestrator | 2026-01-10 14:01:13.216841 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2026-01-10 14:01:13.763958 | orchestrator | changed: [testbed-manager] 2026-01-10 14:01:13.764083 | orchestrator | 2026-01-10 14:01:13.764101 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2026-01-10 14:01:14.201303 | orchestrator | changed: [testbed-manager] 2026-01-10 14:01:14.201495 | orchestrator | 2026-01-10 14:01:14.201514 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2026-01-10 14:01:15.472878 | orchestrator | changed: [testbed-manager] => (item=conductor) 2026-01-10 14:01:15.473039 | orchestrator | changed: [testbed-manager] => (item=openstack) 
2026-01-10 14:01:15.473067 | orchestrator | 2026-01-10 14:01:15.473090 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2026-01-10 14:01:16.120165 | orchestrator | changed: [testbed-manager] 2026-01-10 14:01:16.120273 | orchestrator | 2026-01-10 14:01:16.120291 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2026-01-10 14:01:16.530245 | orchestrator | ok: [testbed-manager] 2026-01-10 14:01:16.530355 | orchestrator | 2026-01-10 14:01:16.530380 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2026-01-10 14:01:16.892508 | orchestrator | changed: [testbed-manager] 2026-01-10 14:01:16.892616 | orchestrator | 2026-01-10 14:01:16.892633 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2026-01-10 14:01:16.933814 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:01:16.933975 | orchestrator | 2026-01-10 14:01:16.933995 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2026-01-10 14:01:17.009305 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2026-01-10 14:01:17.009451 | orchestrator | 2026-01-10 14:01:17.009470 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2026-01-10 14:01:17.047170 | orchestrator | ok: [testbed-manager] 2026-01-10 14:01:17.047262 | orchestrator | 2026-01-10 14:01:17.047276 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2026-01-10 14:01:19.159354 | orchestrator | changed: [testbed-manager] => (item=osism) 2026-01-10 14:01:19.159504 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2026-01-10 14:01:19.159520 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 
2026-01-10 14:01:19.159531 | orchestrator | 2026-01-10 14:01:19.159542 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2026-01-10 14:01:19.883284 | orchestrator | changed: [testbed-manager] 2026-01-10 14:01:19.883383 | orchestrator | 2026-01-10 14:01:19.883400 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2026-01-10 14:01:20.607709 | orchestrator | changed: [testbed-manager] 2026-01-10 14:01:20.607812 | orchestrator | 2026-01-10 14:01:20.607830 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2026-01-10 14:01:21.329012 | orchestrator | changed: [testbed-manager] 2026-01-10 14:01:21.329120 | orchestrator | 2026-01-10 14:01:21.329139 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2026-01-10 14:01:21.413118 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2026-01-10 14:01:21.413214 | orchestrator | 2026-01-10 14:01:21.413229 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2026-01-10 14:01:21.469997 | orchestrator | ok: [testbed-manager] 2026-01-10 14:01:21.470162 | orchestrator | 2026-01-10 14:01:21.470182 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2026-01-10 14:01:22.226833 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2026-01-10 14:01:22.226943 | orchestrator | 2026-01-10 14:01:22.226953 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2026-01-10 14:01:22.318881 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2026-01-10 14:01:22.318963 | orchestrator | 2026-01-10 14:01:22.318974 | orchestrator | 
TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2026-01-10 14:01:23.070676 | orchestrator | changed: [testbed-manager] 2026-01-10 14:01:23.070798 | orchestrator | 2026-01-10 14:01:23.070813 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2026-01-10 14:01:23.687456 | orchestrator | ok: [testbed-manager] 2026-01-10 14:01:23.687586 | orchestrator | 2026-01-10 14:01:23.687604 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2026-01-10 14:01:23.743549 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:01:23.743661 | orchestrator | 2026-01-10 14:01:23.743678 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2026-01-10 14:01:23.796879 | orchestrator | ok: [testbed-manager] 2026-01-10 14:01:23.796984 | orchestrator | 2026-01-10 14:01:23.796999 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2026-01-10 14:01:24.620506 | orchestrator | changed: [testbed-manager] 2026-01-10 14:01:24.621813 | orchestrator | 2026-01-10 14:01:24.621853 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2026-01-10 14:02:34.201294 | orchestrator | changed: [testbed-manager] 2026-01-10 14:02:34.201495 | orchestrator | 2026-01-10 14:02:34.201518 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2026-01-10 14:02:35.300393 | orchestrator | ok: [testbed-manager] 2026-01-10 14:02:35.300647 | orchestrator | 2026-01-10 14:02:35.300678 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] ******* 2026-01-10 14:02:35.357635 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:02:35.357750 | orchestrator | 2026-01-10 14:02:35.357765 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 
2026-01-10 14:02:38.151964 | orchestrator | changed: [testbed-manager] 2026-01-10 14:02:38.152103 | orchestrator | 2026-01-10 14:02:38.152127 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2026-01-10 14:02:38.239342 | orchestrator | ok: [testbed-manager] 2026-01-10 14:02:38.239502 | orchestrator | 2026-01-10 14:02:38.239520 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-01-10 14:02:38.239533 | orchestrator | 2026-01-10 14:02:38.239545 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2026-01-10 14:02:38.302362 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:02:38.302515 | orchestrator | 2026-01-10 14:02:38.302532 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2026-01-10 14:03:38.351212 | orchestrator | Pausing for 60 seconds 2026-01-10 14:03:38.351345 | orchestrator | changed: [testbed-manager] 2026-01-10 14:03:38.351361 | orchestrator | 2026-01-10 14:03:38.351375 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2026-01-10 14:03:41.960662 | orchestrator | changed: [testbed-manager] 2026-01-10 14:03:41.960790 | orchestrator | 2026-01-10 14:03:41.960806 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2026-01-10 14:04:43.911597 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2026-01-10 14:04:43.911768 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 2026-01-10 14:04:43.911787 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (48 retries left). 
2026-01-10 14:04:43.911801 | orchestrator | changed: [testbed-manager]
2026-01-10 14:04:43.911815 | orchestrator |
2026-01-10 14:04:43.911828 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] ***
2026-01-10 14:04:54.713629 | orchestrator | changed: [testbed-manager]
2026-01-10 14:04:54.713778 | orchestrator |
2026-01-10 14:04:54.713796 | orchestrator | TASK [osism.services.manager : Include initialize tasks] ***********************
2026-01-10 14:04:54.798970 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager
2026-01-10 14:04:54.799094 | orchestrator |
2026-01-10 14:04:54.799120 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2026-01-10 14:04:54.799140 | orchestrator |
2026-01-10 14:04:54.799163 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] *****************
2026-01-10 14:04:54.859481 | orchestrator | skipping: [testbed-manager]
2026-01-10 14:04:54.859635 | orchestrator |
2026-01-10 14:04:54.859651 | orchestrator | TASK [osism.services.manager : Include version verification tasks] *************
2026-01-10 14:04:54.941301 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager
2026-01-10 14:04:54.941421 | orchestrator |
2026-01-10 14:04:54.941437 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] ****
2026-01-10 14:04:55.723087 | orchestrator | changed: [testbed-manager]
2026-01-10 14:04:55.723216 | orchestrator |
2026-01-10 14:04:55.723234 | orchestrator | TASK [osism.services.manager : Execute service manager version check] **********
2026-01-10 14:04:59.129981 | orchestrator | ok: [testbed-manager]
2026-01-10 14:04:59.130105 | orchestrator |
2026-01-10 14:04:59.130115 | orchestrator | TASK [osism.services.manager : Display version check results] ******************
2026-01-10 14:04:59.205708 | orchestrator | ok: [testbed-manager] => {
2026-01-10 14:04:59.205792 | orchestrator | "version_check_result.stdout_lines": [
2026-01-10 14:04:59.205803 | orchestrator | "=== OSISM Container Version Check ===",
2026-01-10 14:04:59.205811 | orchestrator | "Checking running containers against expected versions...",
2026-01-10 14:04:59.205819 | orchestrator | "",
2026-01-10 14:04:59.205826 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)",
2026-01-10 14:04:59.205834 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:0.20251130.0",
2026-01-10 14:04:59.205842 | orchestrator | " Enabled: true",
2026-01-10 14:04:59.205849 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:0.20251130.0",
2026-01-10 14:04:59.205912 | orchestrator | " Status: ✅ MATCH",
2026-01-10 14:04:59.205919 | orchestrator | "",
2026-01-10 14:04:59.205927 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)",
2026-01-10 14:04:59.205934 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:0.20251130.0",
2026-01-10 14:04:59.205941 | orchestrator | " Enabled: true",
2026-01-10 14:04:59.205947 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:0.20251130.0",
2026-01-10 14:04:59.205954 | orchestrator | " Status: ✅ MATCH",
2026-01-10 14:04:59.205960 | orchestrator | "",
2026-01-10 14:04:59.205967 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)",
2026-01-10 14:04:59.205974 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:0.20251130.0",
2026-01-10 14:04:59.205980 | orchestrator | " Enabled: true",
2026-01-10 14:04:59.205987 | orchestrator | " Running: registry.osism.tech/osism/osism-kubernetes:0.20251130.0",
2026-01-10 14:04:59.205994 | orchestrator | " Status: ✅ MATCH",
2026-01-10 14:04:59.206000 | orchestrator | "",
2026-01-10 14:04:59.206007 | orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)",
2026-01-10 14:04:59.206054 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:0.20251130.0",
2026-01-10 14:04:59.206062 | orchestrator | " Enabled: true",
2026-01-10 14:04:59.206071 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:0.20251130.0",
2026-01-10 14:04:59.206079 | orchestrator | " Status: ✅ MATCH",
2026-01-10 14:04:59.206085 | orchestrator | "",
2026-01-10 14:04:59.206092 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)",
2026-01-10 14:04:59.206099 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:0.20251130.0",
2026-01-10 14:04:59.206105 | orchestrator | " Enabled: true",
2026-01-10 14:04:59.206112 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:0.20251130.0",
2026-01-10 14:04:59.206118 | orchestrator | " Status: ✅ MATCH",
2026-01-10 14:04:59.206125 | orchestrator | "",
2026-01-10 14:04:59.206132 | orchestrator | "Checking service: osismclient (OSISM Client)",
2026-01-10 14:04:59.206138 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1",
2026-01-10 14:04:59.206145 | orchestrator | " Enabled: true",
2026-01-10 14:04:59.206151 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1",
2026-01-10 14:04:59.206158 | orchestrator | " Status: ✅ MATCH",
2026-01-10 14:04:59.206165 | orchestrator | "",
2026-01-10 14:04:59.206172 | orchestrator | "Checking service: ara-server (ARA Server)",
2026-01-10 14:04:59.206179 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3",
2026-01-10 14:04:59.206185 | orchestrator | " Enabled: true",
2026-01-10 14:04:59.206192 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3",
2026-01-10 14:04:59.206199 | orchestrator | " Status: ✅ MATCH",
2026-01-10 14:04:59.206205 | orchestrator | "",
2026-01-10 14:04:59.206212 | orchestrator | "Checking service: mariadb (MariaDB for ARA)",
2026-01-10 14:04:59.206219 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4",
2026-01-10 14:04:59.206225 | orchestrator | " Enabled: true",
2026-01-10 14:04:59.206232 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4",
2026-01-10 14:04:59.206238 | orchestrator | " Status: ✅ MATCH",
2026-01-10 14:04:59.206245 | orchestrator | "",
2026-01-10 14:04:59.206252 | orchestrator | "Checking service: frontend (OSISM Frontend)",
2026-01-10 14:04:59.206258 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:0.20251130.1",
2026-01-10 14:04:59.206266 | orchestrator | " Enabled: true",
2026-01-10 14:04:59.206274 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:0.20251130.1",
2026-01-10 14:04:59.206282 | orchestrator | " Status: ✅ MATCH",
2026-01-10 14:04:59.206290 | orchestrator | "",
2026-01-10 14:04:59.206298 | orchestrator | "Checking service: redis (Redis Cache)",
2026-01-10 14:04:59.206306 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine",
2026-01-10 14:04:59.206314 | orchestrator | " Enabled: true",
2026-01-10 14:04:59.206321 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine",
2026-01-10 14:04:59.206335 | orchestrator | " Status: ✅ MATCH",
2026-01-10 14:04:59.206342 | orchestrator | "",
2026-01-10 14:04:59.206350 | orchestrator | "Checking service: api (OSISM API Service)",
2026-01-10 14:04:59.206358 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1",
2026-01-10 14:04:59.206365 | orchestrator | " Enabled: true",
2026-01-10 14:04:59.206373 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1",
2026-01-10 14:04:59.206381 | orchestrator | " Status: ✅ MATCH",
2026-01-10 14:04:59.206388 | orchestrator | "",
2026-01-10 14:04:59.206396 | orchestrator | "Checking service: listener (OpenStack Event Listener)",
2026-01-10 14:04:59.206403 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1",
2026-01-10 14:04:59.206411 | orchestrator | " Enabled: true",
2026-01-10 14:04:59.206419 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1",
2026-01-10 14:04:59.206427 | orchestrator | " Status: ✅ MATCH",
2026-01-10 14:04:59.206434 | orchestrator | "",
2026-01-10 14:04:59.206442 | orchestrator | "Checking service: openstack (OpenStack Integration)",
2026-01-10 14:04:59.206449 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1",
2026-01-10 14:04:59.206457 | orchestrator | " Enabled: true",
2026-01-10 14:04:59.206465 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1",
2026-01-10 14:04:59.206473 | orchestrator | " Status: ✅ MATCH",
2026-01-10 14:04:59.206480 | orchestrator | "",
2026-01-10 14:04:59.206488 | orchestrator | "Checking service: beat (Celery Beat Scheduler)",
2026-01-10 14:04:59.206510 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1",
2026-01-10 14:04:59.206517 | orchestrator | " Enabled: true",
2026-01-10 14:04:59.206525 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1",
2026-01-10 14:04:59.206546 | orchestrator | " Status: ✅ MATCH",
2026-01-10 14:04:59.206554 | orchestrator | "",
2026-01-10 14:04:59.206568 | orchestrator | "Checking service: flower (Celery Flower Monitor)",
2026-01-10 14:04:59.206576 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1",
2026-01-10 14:04:59.206583 | orchestrator | " Enabled: true",
2026-01-10 14:04:59.206591 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1",
2026-01-10 14:04:59.206599 | orchestrator | " Status: ✅ MATCH",
2026-01-10 14:04:59.206607 | orchestrator | "",
2026-01-10 14:04:59.206614 | orchestrator | "=== Summary ===",
2026-01-10 14:04:59.206622 | orchestrator | "Errors (version mismatches): 0",
2026-01-10 14:04:59.206628 | orchestrator | "Warnings (expected containers not running): 0",
2026-01-10 14:04:59.206635 | orchestrator | "",
2026-01-10 14:04:59.206642 | orchestrator | "✅ All running containers match expected versions!"
2026-01-10 14:04:59.206649 | orchestrator | ]
2026-01-10 14:04:59.206655 | orchestrator | }
2026-01-10 14:04:59.206662 | orchestrator |
2026-01-10 14:04:59.206669 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] ***
2026-01-10 14:04:59.253052 | orchestrator | skipping: [testbed-manager]
2026-01-10 14:04:59.253144 | orchestrator |
2026-01-10 14:04:59.253160 | orchestrator | PLAY RECAP *********************************************************************
2026-01-10 14:04:59.253174 | orchestrator | testbed-manager : ok=70 changed=37 unreachable=0 failed=0 skipped=13 rescued=0 ignored=0
2026-01-10 14:04:59.253186 | orchestrator |
2026-01-10 14:04:59.368793 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-01-10 14:04:59.368891 | orchestrator | + deactivate
2026-01-10 14:04:59.368906 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2026-01-10 14:04:59.368920 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-01-10 14:04:59.368931 | orchestrator | + export PATH
2026-01-10 14:04:59.368942 | orchestrator | + unset _OLD_VIRTUAL_PATH
2026-01-10 14:04:59.368955 | orchestrator | + '[' -n '' ']'
2026-01-10 14:04:59.368966 | orchestrator | + hash -r
2026-01-10 14:04:59.368977 | orchestrator | + '[' -n '' ']'
2026-01-10 14:04:59.368988 | orchestrator | + unset VIRTUAL_ENV
2026-01-10 14:04:59.368999 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2026-01-10 14:04:59.369010 | orchestrator | + '[' '!' '' = nondestructive ']'
2026-01-10 14:04:59.369021 | orchestrator | + unset -f deactivate
2026-01-10 14:04:59.369032 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub
2026-01-10 14:04:59.378185 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-01-10 14:04:59.378237 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2026-01-10 14:04:59.378252 | orchestrator | + local max_attempts=60
2026-01-10 14:04:59.378267 | orchestrator | + local name=ceph-ansible
2026-01-10 14:04:59.378278 | orchestrator | + local attempt_num=1
2026-01-10 14:04:59.379200 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-10 14:04:59.419662 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-01-10 14:04:59.419740 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2026-01-10 14:04:59.419755 | orchestrator | + local max_attempts=60
2026-01-10 14:04:59.419768 | orchestrator | + local name=kolla-ansible
2026-01-10 14:04:59.419779 | orchestrator | + local attempt_num=1
2026-01-10 14:04:59.420260 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2026-01-10 14:04:59.456771 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-01-10 14:04:59.456831 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2026-01-10 14:04:59.456863 | orchestrator | + local max_attempts=60
2026-01-10 14:04:59.456875 | orchestrator | + local name=osism-ansible
2026-01-10 14:04:59.456896 | orchestrator | + local attempt_num=1
2026-01-10 14:04:59.457569 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2026-01-10 14:04:59.494466 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-01-10 14:04:59.495193 | orchestrator | + [[ true == \t\r\u\e ]]
2026-01-10 14:04:59.495219 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2026-01-10 14:05:00.227834 | orchestrator | + docker compose --project-directory /opt/manager ps
2026-01-10 14:05:00.406395 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
2026-01-10 14:05:00.406559 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:0.20251130.0 "/entrypoint.sh osis…" ceph-ansible 2 minutes ago Up About a minute (healthy)
2026-01-10 14:05:00.406583 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:0.20251130.0 "/entrypoint.sh osis…" kolla-ansible 2 minutes ago Up About a minute (healthy)
2026-01-10 14:05:00.406596 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" api 2 minutes ago Up 2 minutes (healthy) 192.168.16.5:8000->8000/tcp
2026-01-10 14:05:00.406612 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server 2 minutes ago Up 2 minutes (healthy) 8000/tcp
2026-01-10 14:05:00.406650 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" beat 2 minutes ago Up 2 minutes (healthy)
2026-01-10 14:05:00.406665 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" flower 2 minutes ago Up 2 minutes (healthy)
2026-01-10 14:05:00.406679 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:0.20251130.0 "/sbin/tini -- /entr…" inventory_reconciler 2 minutes ago Up About a minute (healthy)
2026-01-10 14:05:00.406692 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" listener 2 minutes ago Up 2 minutes (healthy)
2026-01-10 14:05:00.406705 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb 2 minutes ago Up 2 minutes (healthy) 3306/tcp
2026-01-10 14:05:00.406718 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" openstack 2 minutes ago Up 2 minutes (healthy)
2026-01-10 14:05:00.406733 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis 2 minutes ago Up 2 minutes (healthy) 6379/tcp
2026-01-10 14:05:00.406773 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:0.20251130.0 "/entrypoint.sh osis…" osism-ansible 2 minutes ago Up About a minute (healthy)
2026-01-10 14:05:00.406782 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:0.20251130.1 "docker-entrypoint.s…" frontend 2 minutes ago Up 2 minutes 192.168.16.5:3000->3000/tcp
2026-01-10 14:05:00.406790 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:0.20251130.0 "/entrypoint.sh osis…" osism-kubernetes 2 minutes ago Up About a minute (healthy)
2026-01-10 14:05:00.406799 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- sleep…" osismclient 2 minutes ago Up 2 minutes (healthy)
2026-01-10 14:05:00.413306 | orchestrator | ++ semver 9.5.0 7.0.0
2026-01-10 14:05:00.468744 | orchestrator | + [[ 1 -ge 0 ]]
2026-01-10 14:05:00.468836 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg
2026-01-10 14:05:00.473826 | orchestrator | + osism apply resolvconf -l testbed-manager
2026-01-10 14:05:12.715368 | orchestrator | 2026-01-10 14:05:12 | INFO  | Task c3c25371-b28e-4dcc-aa18-8ca9ca7da2b7 (resolvconf) was prepared for execution.
2026-01-10 14:05:12.715458 | orchestrator | 2026-01-10 14:05:12 | INFO  | It takes a moment until task c3c25371-b28e-4dcc-aa18-8ca9ca7da2b7 (resolvconf) has been started and output is visible here.
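The `set -x` trace above shows a `wait_for_container_healthy` helper being called per container (`wait_for_container_healthy 60 ceph-ansible`, …) and polling `docker inspect -f '{{.State.Health.Status}}'`. The function body itself is not in the log; the following is a hypothetical reconstruction consistent with the traced variables (`max_attempts`, `name`, `attempt_num`). The sleep interval and failure message are assumptions, and plain `docker` is used where the trace shows `/usr/bin/docker`:

```shell
# Hypothetical reconstruction of the wait_for_container_healthy helper
# seen in the trace above; NOT the actual OSISM testbed script.
wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1
    # Poll the container's Docker healthcheck status until it is "healthy".
    until [ "$(docker inspect -f '{{.State.Health.Status}}' "$name" 2>/dev/null)" = "healthy" ]; do
        if [ "$attempt_num" -ge "$max_attempts" ]; then
            echo "container $name did not become healthy" >&2
            return 1
        fi
        attempt_num=$((attempt_num + 1))
        sleep 5   # interval is an assumption; the log does not show it
    done
}
```

With a healthy container, `wait_for_container_healthy 60 ceph-ansible` returns on the first probe, which matches the trace: each call shows exactly one `docker inspect` followed by the `[[ healthy == … ]]` comparison.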
2026-01-10 14:05:26.930194 | orchestrator |
2026-01-10 14:05:26.930310 | orchestrator | PLAY [Apply role resolvconf] ***************************************************
2026-01-10 14:05:26.930328 | orchestrator |
2026-01-10 14:05:26.930341 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-01-10 14:05:26.930353 | orchestrator | Saturday 10 January 2026 14:05:16 +0000 (0:00:00.150) 0:00:00.150 ******
2026-01-10 14:05:26.930364 | orchestrator | ok: [testbed-manager]
2026-01-10 14:05:26.930377 | orchestrator |
2026-01-10 14:05:26.930388 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2026-01-10 14:05:26.930400 | orchestrator | Saturday 10 January 2026 14:05:20 +0000 (0:00:03.811) 0:00:03.961 ******
2026-01-10 14:05:26.930410 | orchestrator | skipping: [testbed-manager]
2026-01-10 14:05:26.930423 | orchestrator |
2026-01-10 14:05:26.930434 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2026-01-10 14:05:26.930445 | orchestrator | Saturday 10 January 2026 14:05:20 +0000 (0:00:00.079) 0:00:04.040 ******
2026-01-10 14:05:26.930456 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager
2026-01-10 14:05:26.930469 | orchestrator |
2026-01-10 14:05:26.930480 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2026-01-10 14:05:26.930491 | orchestrator | Saturday 10 January 2026 14:05:20 +0000 (0:00:00.093) 0:00:04.133 ******
2026-01-10 14:05:26.930594 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager
2026-01-10 14:05:26.930612 | orchestrator |
2026-01-10 14:05:26.930623 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2026-01-10 14:05:26.930635 | orchestrator | Saturday 10 January 2026 14:05:21 +0000 (0:00:00.099) 0:00:04.233 ******
2026-01-10 14:05:26.930646 | orchestrator | ok: [testbed-manager]
2026-01-10 14:05:26.930657 | orchestrator |
2026-01-10 14:05:26.930668 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2026-01-10 14:05:26.930679 | orchestrator | Saturday 10 January 2026 14:05:22 +0000 (0:00:01.179) 0:00:05.413 ******
2026-01-10 14:05:26.930689 | orchestrator | skipping: [testbed-manager]
2026-01-10 14:05:26.930700 | orchestrator |
2026-01-10 14:05:26.930711 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2026-01-10 14:05:26.930746 | orchestrator | Saturday 10 January 2026 14:05:22 +0000 (0:00:00.059) 0:00:05.472 ******
2026-01-10 14:05:26.930757 | orchestrator | ok: [testbed-manager]
2026-01-10 14:05:26.930768 | orchestrator |
2026-01-10 14:05:26.930779 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2026-01-10 14:05:26.930790 | orchestrator | Saturday 10 January 2026 14:05:22 +0000 (0:00:00.513) 0:00:05.986 ******
2026-01-10 14:05:26.930800 | orchestrator | skipping: [testbed-manager]
2026-01-10 14:05:26.930811 | orchestrator |
2026-01-10 14:05:26.930822 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2026-01-10 14:05:26.930833 | orchestrator | Saturday 10 January 2026 14:05:22 +0000 (0:00:00.071) 0:00:06.057 ******
2026-01-10 14:05:26.930844 | orchestrator | changed: [testbed-manager]
2026-01-10 14:05:26.930855 | orchestrator |
2026-01-10 14:05:26.930865 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2026-01-10 14:05:26.930876 | orchestrator | Saturday 10 January 2026 14:05:23 +0000 (0:00:00.544) 0:00:06.602 ******
2026-01-10 14:05:26.930887 | orchestrator | changed: [testbed-manager]
2026-01-10 14:05:26.930898 | orchestrator |
2026-01-10 14:05:26.930909 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2026-01-10 14:05:26.930919 | orchestrator | Saturday 10 January 2026 14:05:24 +0000 (0:00:01.093) 0:00:07.695 ******
2026-01-10 14:05:26.930930 | orchestrator | ok: [testbed-manager]
2026-01-10 14:05:26.930941 | orchestrator |
2026-01-10 14:05:26.930951 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2026-01-10 14:05:26.930962 | orchestrator | Saturday 10 January 2026 14:05:25 +0000 (0:00:00.978) 0:00:08.673 ******
2026-01-10 14:05:26.930973 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager
2026-01-10 14:05:26.930983 | orchestrator |
2026-01-10 14:05:26.930994 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2026-01-10 14:05:26.931005 | orchestrator | Saturday 10 January 2026 14:05:25 +0000 (0:00:00.075) 0:00:08.749 ******
2026-01-10 14:05:26.931015 | orchestrator | changed: [testbed-manager]
2026-01-10 14:05:26.931026 | orchestrator |
2026-01-10 14:05:26.931037 | orchestrator | PLAY RECAP *********************************************************************
2026-01-10 14:05:26.931048 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-01-10 14:05:26.931059 | orchestrator |
2026-01-10 14:05:26.931070 | orchestrator |
2026-01-10 14:05:26.931080 | orchestrator | TASKS RECAP ********************************************************************
2026-01-10 14:05:26.931091 | orchestrator | Saturday 10 January 2026 14:05:26 +0000 (0:00:01.155) 0:00:09.905 ******
2026-01-10 14:05:26.931102 | orchestrator | ===============================================================================
2026-01-10 14:05:26.931112 | orchestrator | Gathering Facts --------------------------------------------------------- 3.81s
2026-01-10 14:05:26.931123 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.18s
2026-01-10 14:05:26.931133 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.16s
2026-01-10 14:05:26.931144 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.09s
2026-01-10 14:05:26.931154 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.98s
2026-01-10 14:05:26.931165 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.54s
2026-01-10 14:05:26.931194 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.51s
2026-01-10 14:05:26.931206 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.10s
2026-01-10 14:05:26.931217 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.09s
2026-01-10 14:05:26.931228 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.08s
2026-01-10 14:05:26.931246 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.08s
2026-01-10 14:05:26.931257 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.07s
2026-01-10 14:05:26.931268 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.06s
2026-01-10 14:05:27.239068 | orchestrator | + osism apply sshconfig
2026-01-10 14:05:39.394315 | orchestrator | 2026-01-10 14:05:39 | INFO  | Task 669b722e-c4d7-4f33-add6-fcd6c608bc1f (sshconfig) was prepared for execution.
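The sshconfig play that runs next writes one configuration snippet per host into `~/.ssh/config.d` and then assembles them into a single `~/.ssh/config` ("Ensure .ssh/config.d exist" → "Ensure config for each host exist" → "Assemble ssh config"). As a rough shell sketch of that pattern only (the role's actual templates and `Host` options are not shown in the log, so the options and the working directory below are illustrative assumptions):

```shell
# Sketch of the sshconfig pattern: one fragment per host, assembled into
# one ssh client config. Uses a temp dir as a stand-in for the operator's
# home; the real role operates on the operator user's ~/.ssh.
workdir="$(mktemp -d)"
sshdir="${workdir}/.ssh"
mkdir -p "${sshdir}/config.d"        # "Ensure .ssh/config.d exist"

for host in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 \
            testbed-node-3 testbed-node-4 testbed-node-5; do
    # "Ensure config for each host exist" -- the Host options here are
    # illustrative only, not the role's actual template output.
    cat > "${sshdir}/config.d/${host}" <<EOF
Host ${host}
    User dragon
    StrictHostKeyChecking accept-new
EOF
done

# "Assemble ssh config": concatenate all fragments, as Ansible's assemble
# module does with a fragment directory.
cat "${sshdir}/config.d/"* > "${sshdir}/config"
```

Keeping per-host fragments separate lets a later run rewrite a single host's entry and reassemble, which matches the per-item `changed:` output seen in the play.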
2026-01-10 14:05:39.394446 | orchestrator | 2026-01-10 14:05:39 | INFO  | It takes a moment until task 669b722e-c4d7-4f33-add6-fcd6c608bc1f (sshconfig) has been started and output is visible here.
2026-01-10 14:05:51.531887 | orchestrator |
2026-01-10 14:05:51.532045 | orchestrator | PLAY [Apply role sshconfig] ****************************************************
2026-01-10 14:05:51.532063 | orchestrator |
2026-01-10 14:05:51.532102 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] ***********
2026-01-10 14:05:51.532115 | orchestrator | Saturday 10 January 2026 14:05:43 +0000 (0:00:00.180) 0:00:00.180 ******
2026-01-10 14:05:51.532126 | orchestrator | ok: [testbed-manager]
2026-01-10 14:05:51.532139 | orchestrator |
2026-01-10 14:05:51.532150 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ********************
2026-01-10 14:05:51.532162 | orchestrator | Saturday 10 January 2026 14:05:44 +0000 (0:00:00.566) 0:00:00.747 ******
2026-01-10 14:05:51.532173 | orchestrator | changed: [testbed-manager]
2026-01-10 14:05:51.532185 | orchestrator |
2026-01-10 14:05:51.532196 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] *************
2026-01-10 14:05:51.532207 | orchestrator | Saturday 10 January 2026 14:05:44 +0000 (0:00:00.517) 0:00:01.264 ******
2026-01-10 14:05:51.532218 | orchestrator | changed: [testbed-manager] => (item=testbed-manager)
2026-01-10 14:05:51.532229 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0)
2026-01-10 14:05:51.532241 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1)
2026-01-10 14:05:51.532251 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2)
2026-01-10 14:05:51.532262 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3)
2026-01-10 14:05:51.532273 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4)
2026-01-10 14:05:51.532284 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5)
2026-01-10 14:05:51.532295 | orchestrator |
2026-01-10 14:05:51.532305 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ******************************
2026-01-10 14:05:51.532317 | orchestrator | Saturday 10 January 2026 14:05:50 +0000 (0:00:05.867) 0:00:07.131 ******
2026-01-10 14:05:51.532327 | orchestrator | skipping: [testbed-manager]
2026-01-10 14:05:51.532338 | orchestrator |
2026-01-10 14:05:51.532349 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] ***************************
2026-01-10 14:05:51.532360 | orchestrator | Saturday 10 January 2026 14:05:50 +0000 (0:00:00.075) 0:00:07.206 ******
2026-01-10 14:05:51.532371 | orchestrator | changed: [testbed-manager]
2026-01-10 14:05:51.532382 | orchestrator |
2026-01-10 14:05:51.532393 | orchestrator | PLAY RECAP *********************************************************************
2026-01-10 14:05:51.532405 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-10 14:05:51.532417 | orchestrator |
2026-01-10 14:05:51.532428 | orchestrator |
2026-01-10 14:05:51.532439 | orchestrator | TASKS RECAP ********************************************************************
2026-01-10 14:05:51.532450 | orchestrator | Saturday 10 January 2026 14:05:51 +0000 (0:00:00.559) 0:00:07.766 ******
2026-01-10 14:05:51.532461 | orchestrator | ===============================================================================
2026-01-10 14:05:51.532472 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.87s
2026-01-10 14:05:51.532482 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.57s
2026-01-10 14:05:51.532493 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.56s
2026-01-10 14:05:51.532563 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.52s
2026-01-10 14:05:51.532577 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.08s
2026-01-10 14:05:51.818674 | orchestrator | + osism apply known-hosts
2026-01-10 14:06:03.857886 | orchestrator | 2026-01-10 14:06:03 | INFO  | Task 9771c4d2-d024-407b-aede-f5de7f294162 (known-hosts) was prepared for execution.
2026-01-10 14:06:03.858071 | orchestrator | 2026-01-10 14:06:03 | INFO  | It takes a moment until task 9771c4d2-d024-407b-aede-f5de7f294162 (known-hosts) has been started and output is visible here.
2026-01-10 14:06:20.744389 | orchestrator |
2026-01-10 14:06:20.744607 | orchestrator | PLAY [Apply role known_hosts] **************************************************
2026-01-10 14:06:20.744627 | orchestrator |
2026-01-10 14:06:20.744639 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] ***
2026-01-10 14:06:20.744653 | orchestrator | Saturday 10 January 2026 14:06:07 +0000 (0:00:00.163) 0:00:00.163 ******
2026-01-10 14:06:20.744665 | orchestrator | ok: [testbed-manager] => (item=testbed-manager)
2026-01-10 14:06:20.744678 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3)
2026-01-10 14:06:20.744689 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4)
2026-01-10 14:06:20.744700 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5)
2026-01-10 14:06:20.744711 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2026-01-10 14:06:20.744722 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2026-01-10 14:06:20.744733 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2026-01-10 14:06:20.744744 | orchestrator |
2026-01-10 14:06:20.744755 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] ***
2026-01-10 14:06:20.744767 | orchestrator | Saturday 10 January 2026 14:06:13 +0000 (0:00:06.055) 0:00:06.219 ******
2026-01-10 14:06:20.744780 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager)
2026-01-10 14:06:20.744794 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3)
2026-01-10 14:06:20.744805 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4)
2026-01-10 14:06:20.744816 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5)
2026-01-10 14:06:20.744827 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0)
2026-01-10 14:06:20.744851 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1)
2026-01-10 14:06:20.744862 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2)
2026-01-10 14:06:20.744873 | orchestrator |
2026-01-10 14:06:20.744884 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-01-10 14:06:20.744895 | orchestrator | Saturday 10 January 2026 14:06:14 +0000 (0:00:00.165) 0:00:06.384 ******
2026-01-10 14:06:20.744907 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLMbra23lVWw5J+ykGfRG58C57UbhmGZez5nFs4zEshcJLqV3wL3UwpX9bIIIE6Ez9YNjEMsiqJANtM+OsEDcdU=)
2026-01-10 14:06:20.744929 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC10073uocsRju8ryOqQf9z/nvxIn9B3G+YOGMh75NhF+J0I/quFMteW3kJiaIJkjrC9Smg21tBrbQ83u3kUqKfvjIxVDD+m4mmQ/tExOLvsyoBLFVHcj/atsVjE0zURU0lCQQ/7U77DY92VoE/kZAA/rkp4ifNZcvGFE2ZQ4hV/Wm2pLZ9IhDUOJP/gIpgZo5QOi+SY8MoU8fsRei5yTjS8+otZvVJhy/XY/78dcXt3AdKdnIlQFFc3ui2FYPg43zXzTW8JEOGUwK6L1cTAGSymUl/GCRCXhUZaN0VP132124rfdcf75o1rklrF3udsexfeRPyl2kk/0ylPIu/oOzby6isttvB8Di6pmLsp7lTxDnpY4WlKQPQKpvmqZw9wGxZb8UAU1N5kUEVRVIIvWIthRlxoSDptYq6O4sixUYrklWAjxrnl2drLV01BqtZ/YelrQgzorzyPt60lYOyFdfs6ly4s83oZwkMFRm0gn/YvOz1KZ8ttEl4+xfpag8d2hk=)
2026-01-10 14:06:20.744966 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMoIBoXLPAUKHTRX66wgMJHCwVOFAT5H/8pBICiDSPiN)
2026-01-10 14:06:20.744979 | orchestrator |
2026-01-10 14:06:20.744990 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-01-10 14:06:20.745001 | orchestrator | Saturday 10 January 2026 14:06:15 +0000 (0:00:01.189) 0:00:07.573 ******
2026-01-10 14:06:20.745012 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHv6nj+0x7Yn1aIb+oRf5Dqwzh11lA+KHq7LCvXjBS2tNdex8d3gsgUXbC54o0Led/tbCXyESMVgl8dXK3MYlxM=)
2026-01-10 14:06:20.745055 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCrfJuVqNoH/tZT8q8LEUKEmavgsbhJEn5JvNDHLJMlRWwbOgcGRzqSbSW3LF0H2wUOYilW26GCvzNYHrybCK7JDcImmRDrg77is/w16bQqHUtNQSu2EgVhIu7Gj1xCSEwGQ0JSz67B0ADr9Lr/PATXDLVVpD/PrX3C9sC7ViJ4jvEEXTe8O5Q6b7raS78iphZoGYnVURYrq7kPQJ2iy6+gkOiL2anU/48mnKpA+z9MkyGL9xP8rdqLEh1OuVVW8b53FEUYB0/bmqpxxKMN+OOGNfInX/3VF+7JgsSva0TWPRGWAf6flsiSidhmT7/+Vq89PHTGNP1qdz07PlXBh6c4BHbAP2Rti6aKfxRiDpd8XPWirU27GkJJgcSUjX7AayEY4R6gGGFdFiyuv+1GTwo3Ba/GQYnGTYlTnm770n81VXFYeIVUKYfQ4D2+9mz+7gpZsyc8YZnmuchaTid8wZ1GdcZrCBlpotJfpskI1ln6189AvDLEUd3fb2WOdLtxWyU=)
2026-01-10 14:06:20.745068 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF6daoAsIs4RWgw5nZ5ysrZh6HktjHry2nF2V1ao0sIT)
2026-01-10 14:06:20.745079 | orchestrator |
2026-01-10 14:06:20.745090 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-01-10 14:06:20.745102 | orchestrator | Saturday 10 January 2026 14:06:16 +0000 (0:00:01.071) 0:00:08.644 ******
2026-01-10 14:06:20.745113 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDicf5zRK4Qjrnti5cWRDO9IkKsF0alvLOjYNjfBeWeexEt/j2G1rRbz/KDNeU1JupyLtLcU/EY2SUDk4LRz302B6zQVRR2CrpQkxfZfV2R7n4b1iIy0J0qIItjRDF8eyfXpl2UFp/4IsZBtqkFbKm0WkhjrcGm6c0jqJLN5DHRQ1Y3eZqmHm3M0Xj/u5XcGsv6H7iru7KNMFoqQpmGBqiHXoWrAuPncEz5kxqMzHtBf4Gi0ZBUxC5OUCD24gKdGwun/qJ/2d6lbTKAFibDT03tnPgD0jlT7zpJ1+YLVqEKmQPZ11neQZMfIOEvIlFt+uQ7v0Fq+SrSTpj4cPXB5DqttBlG1SXagh3p8AUxJIkeH0SIFHRJXkCC1DdmEZoy4xeH6e4ZRD9BsdrtP0NOb1y/n21PA4xolAWBCoeW6oauEDqQfwTKGL4c4+rOXyQrrmKECMBAbVGITXEBMImqTzD9SJOtewmZTBi8DaIYJDIlAY8jDIAJg8RPwpMyTmimH78=)
2026-01-10 14:06:20.745125 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOLXtcL2pm1XNNiQp4zJSJq60hRvxXDpqLKyVuqOhk2eUWloRlAAnhrJVzpuGvtucslWghFwtp4lKNQe/esiExE=)
2026-01-10 14:06:20.745136 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOBiYFziPzaXfKLPN8aFFXe0riKkxy7+m2dGbEhc6NT6)
2026-01-10 14:06:20.745147 | orchestrator |
2026-01-10 14:06:20.745158 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-01-10 14:06:20.745169 | orchestrator | Saturday 10 January 2026 14:06:17 +0000 (0:00:01.063) 0:00:09.708 ******
2026-01-10 14:06:20.745180 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDIOEpWFRGk9xO0/8elzrXnGg2UAjXA+QEQ4wtmVqDfMDaxGqWjTsbVAXjZRrGwM3hjOfxcwyYEx+4jOXQghIzTV3kT34wqvaTQG011lpmquWBSpGo2+9YePfSqY+Rvj9Sq2/s7P/MyrBdW32n2mcHxxDRMN+7l+k8TwW+um5j8IDiVQN03kh52j8jTo5uyc+iedL+3RcpiLXNNuJCLQloSKrEvp7KXbU9bCRfvUwKN3Mf4pV7aqqtO07v0e5oACuRzlD2dfpp3yMLH+L8UlT3eTSrxHBmgS14Vq2V+4YqloA3lXQOzzbCS5gl4l/fvDOWDiTJIb2p2ZLJzcSgt4YH5iNhSudRA1wl64jDH8LvLiE8npE1R70qoz+3ZXp8WbjeBFEE4j2eGe3bEbdvsGjGswwTp1f312CwsJ2tU3/KbqmdBFfGMJIerYZHjS3yzJnnnHKxp6jhtK12A+eI4SKlTs5Xy1x8/yXd7suvQh/Ql5PjRMqLeF4t1P/pNZOYhnR8=)
2026-01-10 14:06:20.745199 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBE/nhAr9lEV7nIRvpSuHidj4I/4VKPvoIz1QhjDznDIvUAUxEzh/c7IQtt3nKAv8Z82XIT20XOGzqqP0dQmbbJE=)
2026-01-10 14:06:20.745211 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHLqHtx2TI+bCii97o1vmUjISE4CAqvy8wVcOWnfJtP7)
2026-01-10 14:06:20.745222 | orchestrator |
2026-01-10 14:06:20.745232 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-01-10 14:06:20.745243 | orchestrator | Saturday 10 January 2026 14:06:18 +0000 (0:00:01.068) 0:00:10.776 ******
2026-01-10 14:06:20.745336 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa
AAAAB3NzaC1yc2EAAAADAQABAAABgQCnSI3cwGJkE9t18OH3G5e17bmPiZJugQo6EJGQzw2Soth4HmOk0z8LSqScLydsohm1IjRQIAUlilxMY9ALXALHD4mHfGoyFWYKECeOjTBdIDnIE6WqdTEOu6yF6RmOVeYeoJPXFvZRgZ6M17vYm39bc3KanT9fQkCFjVXnbV5fDJo55Uwpec/pf7TH5jDS2m+Koi6JOLhgvnEVRp1EGwW1/55mKXd7ZPqxGvx6piC03SH8AquqIP9AEorMFjT+hLpuiUv0axyB1Yk2QLGmwg2ShZc4ObRWVDfyLvDR0SW93sCR94FPkovEji0wk1183N5Ma2y2WbrPWiLONrLz5jlNy6pV1dynZAHDLpwkzkNPvGoXLLgdYaazabjnx4yPGdHtUF+Uw/KT0okqlW1djtwenbQ3I7nzs3iFfpMvH8SpE7PphtbnSZa/StmPigNMfdcSZfcmYPdcf2EH2PBgHL6BxRtMa0WMzUbq2/6KHNe/8UtJGVb73iF4bxmnVuITdzs=) 2026-01-10 14:06:20.745348 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFb1lbgRWd2WXCfO2LVWGptYVjywyiu4kEmJorOhpQFmVbajIU+nd8G0VSvGnG/NITnM6c0FWmFNtDOIaCTs1uo=) 2026-01-10 14:06:20.745359 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAcQ9doEewBrdmCL1mMO1EyiaQIJ9vf6sgu6Ww5/R9rG) 2026-01-10 14:06:20.745370 | orchestrator | 2026-01-10 14:06:20.745381 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-10 14:06:20.745392 | orchestrator | Saturday 10 January 2026 14:06:19 +0000 (0:00:01.093) 0:00:11.870 ****** 2026-01-10 14:06:20.745412 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDv8eWrLlPQx1ErYLP2G9Vp5lS/WqezqQtGeHNrwc7pj/KwsOIkJ2dER15pX0XGQ6uBktjmhPxpPJ41hCQllqzBpngBMcwd+pQ6pzX3A4YWD/2Sv/B4nTPnd4/+yBGaEftDD071RxcFBK6gwsNNi2J8qMkou7xYahlsQIPtbX+IC9ze1atA0LHYWcT+nFRpG//bNsokAlRSAyuuVfuzZGqzVjMiPO6xBUV1rCNKdprjC+AxMl+Bc45xLM4cvf3seMhJ/mX2ndJexw2/LHJQccTLVepWOrspwp4bAUtb4bkjWp0BHpR8MhSSpAnvGXs1sweo4u27inKnMBbiJmvW0GCTbEkDGg6mimdOYpM+NRLmWR9F09GDjtSMuUTIx1K5SeTGoF+PkFaqSUtv+yxgU3BP2TvajnChL2XaaeYwJnCxF/a/1DPQRfAI/YHLYzRKyupNMkeEMbm2U2glnpOtt8QQ3Fz0+hDZ26+SqAZMIv9Hw/nGItSMofNdaSzVDvJccJM=) 2026-01-10 14:06:31.725757 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGAFFe06dyXfOVZ57KRjv7d3PkR2XxSSJvYCjHv/HAeRtFxbh3HMhhBHHrsL6qZ5hi5w8PMxK89yDG5IzNT4Qn8=) 2026-01-10 14:06:31.725875 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGlUX300lvyGiiZdgkfCYmshU76w9+27YMNgj7iNg2G3) 2026-01-10 14:06:31.725892 | orchestrator | 2026-01-10 14:06:31.725905 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-10 14:06:31.725918 | orchestrator | Saturday 10 January 2026 14:06:20 +0000 (0:00:01.086) 0:00:12.956 ****** 2026-01-10 14:06:31.725932 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCl+044DPWsYidmx2fbPKUY4e/tdYE7T68Cd6qYIXsr3kPU4RkNczEO+a67qlJiVT49OpTcQWkwJgdNfUqNJxpI/hojHAY/+yP59xEmrPcLc97EUDfMztWfx/EKrpdpsMilDk+pXQaJlFhImPms4lmJHEo6QpBcCpkR8wfE0lDdKY6I9qkscGsBUDZgil3WzlxEhbySyjoIQrsXQNdeCXnkvGH4RLEh2sEC+QpqXfAvwXubAR+tx8+j+p7EtHCL/j6OmnjdXAtQWueG3xtBHI+oyxQFCZj/4EdR5stDFnWt2NrGa67rrXjVoUqlNmPO+qTBT7IW2h2Lh9ojSBl/8OMmQl3JOkrTnHzncrlnQOwch0qQoGvKYnpo5OJUBd3BjXUSYgwW/5GynzsPVyeSM5YWVaRxiDA3Fq8CkMJsS15Q3yb84+60oeNQSUu9tHZ2Vd63rGQST/9DRnzY7dI195wPcHSfU/lgcOfUKuK4+RYhxCYkBxwnU3UsWyoURBV8BzM=) 2026-01-10 14:06:31.725968 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJQywKKwpvmVhaYSLtciVWWqvUaiAP6Qu2/6chdjZlthfgACvgxEAZinUJAzq3X/s0d/UFDh3oPN461qyoX6Lmg=) 2026-01-10 14:06:31.725980 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAZt5MvioXCcBoqUuzB5F1FS4D4XLLo8GATw2xiaH+fD) 2026-01-10 14:06:31.725991 | orchestrator | 2026-01-10 14:06:31.726002 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2026-01-10 14:06:31.726065 | orchestrator | Saturday 10 January 2026 14:06:21 +0000 
(0:00:01.032) 0:00:13.988 ****** 2026-01-10 14:06:31.726079 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-01-10 14:06:31.726091 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-01-10 14:06:31.726102 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-01-10 14:06:31.726113 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-01-10 14:06:31.726123 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-01-10 14:06:31.726134 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-01-10 14:06:31.726145 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-01-10 14:06:31.726156 | orchestrator | 2026-01-10 14:06:31.726167 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2026-01-10 14:06:31.726179 | orchestrator | Saturday 10 January 2026 14:06:27 +0000 (0:00:05.414) 0:00:19.403 ****** 2026-01-10 14:06:31.726191 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-01-10 14:06:31.726204 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-01-10 14:06:31.726215 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-01-10 14:06:31.726227 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-01-10 14:06:31.726238 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-01-10 14:06:31.726248 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-01-10 14:06:31.726259 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-01-10 14:06:31.726270 | orchestrator | 2026-01-10 14:06:31.726282 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-10 14:06:31.726295 | orchestrator | Saturday 10 January 2026 14:06:27 +0000 (0:00:00.174) 0:00:19.577 ****** 2026-01-10 14:06:31.726342 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC10073uocsRju8ryOqQf9z/nvxIn9B3G+YOGMh75NhF+J0I/quFMteW3kJiaIJkjrC9Smg21tBrbQ83u3kUqKfvjIxVDD+m4mmQ/tExOLvsyoBLFVHcj/atsVjE0zURU0lCQQ/7U77DY92VoE/kZAA/rkp4ifNZcvGFE2ZQ4hV/Wm2pLZ9IhDUOJP/gIpgZo5QOi+SY8MoU8fsRei5yTjS8+otZvVJhy/XY/78dcXt3AdKdnIlQFFc3ui2FYPg43zXzTW8JEOGUwK6L1cTAGSymUl/GCRCXhUZaN0VP132124rfdcf75o1rklrF3udsexfeRPyl2kk/0ylPIu/oOzby6isttvB8Di6pmLsp7lTxDnpY4WlKQPQKpvmqZw9wGxZb8UAU1N5kUEVRVIIvWIthRlxoSDptYq6O4sixUYrklWAjxrnl2drLV01BqtZ/YelrQgzorzyPt60lYOyFdfs6ly4s83oZwkMFRm0gn/YvOz1KZ8ttEl4+xfpag8d2hk=) 2026-01-10 14:06:31.726368 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLMbra23lVWw5J+ykGfRG58C57UbhmGZez5nFs4zEshcJLqV3wL3UwpX9bIIIE6Ez9YNjEMsiqJANtM+OsEDcdU=) 2026-01-10 14:06:31.726393 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMoIBoXLPAUKHTRX66wgMJHCwVOFAT5H/8pBICiDSPiN) 2026-01-10 
14:06:31.726406 | orchestrator | 2026-01-10 14:06:31.726419 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-10 14:06:31.726436 | orchestrator | Saturday 10 January 2026 14:06:28 +0000 (0:00:01.096) 0:00:20.674 ****** 2026-01-10 14:06:31.726449 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF6daoAsIs4RWgw5nZ5ysrZh6HktjHry2nF2V1ao0sIT) 2026-01-10 14:06:31.726462 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCrfJuVqNoH/tZT8q8LEUKEmavgsbhJEn5JvNDHLJMlRWwbOgcGRzqSbSW3LF0H2wUOYilW26GCvzNYHrybCK7JDcImmRDrg77is/w16bQqHUtNQSu2EgVhIu7Gj1xCSEwGQ0JSz67B0ADr9Lr/PATXDLVVpD/PrX3C9sC7ViJ4jvEEXTe8O5Q6b7raS78iphZoGYnVURYrq7kPQJ2iy6+gkOiL2anU/48mnKpA+z9MkyGL9xP8rdqLEh1OuVVW8b53FEUYB0/bmqpxxKMN+OOGNfInX/3VF+7JgsSva0TWPRGWAf6flsiSidhmT7/+Vq89PHTGNP1qdz07PlXBh6c4BHbAP2Rti6aKfxRiDpd8XPWirU27GkJJgcSUjX7AayEY4R6gGGFdFiyuv+1GTwo3Ba/GQYnGTYlTnm770n81VXFYeIVUKYfQ4D2+9mz+7gpZsyc8YZnmuchaTid8wZ1GdcZrCBlpotJfpskI1ln6189AvDLEUd3fb2WOdLtxWyU=) 2026-01-10 14:06:31.726475 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHv6nj+0x7Yn1aIb+oRf5Dqwzh11lA+KHq7LCvXjBS2tNdex8d3gsgUXbC54o0Led/tbCXyESMVgl8dXK3MYlxM=) 2026-01-10 14:06:31.726487 | orchestrator | 2026-01-10 14:06:31.726500 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-10 14:06:31.726513 | orchestrator | Saturday 10 January 2026 14:06:29 +0000 (0:00:01.089) 0:00:21.764 ****** 2026-01-10 14:06:31.726569 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOBiYFziPzaXfKLPN8aFFXe0riKkxy7+m2dGbEhc6NT6) 2026-01-10 14:06:31.726583 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDicf5zRK4Qjrnti5cWRDO9IkKsF0alvLOjYNjfBeWeexEt/j2G1rRbz/KDNeU1JupyLtLcU/EY2SUDk4LRz302B6zQVRR2CrpQkxfZfV2R7n4b1iIy0J0qIItjRDF8eyfXpl2UFp/4IsZBtqkFbKm0WkhjrcGm6c0jqJLN5DHRQ1Y3eZqmHm3M0Xj/u5XcGsv6H7iru7KNMFoqQpmGBqiHXoWrAuPncEz5kxqMzHtBf4Gi0ZBUxC5OUCD24gKdGwun/qJ/2d6lbTKAFibDT03tnPgD0jlT7zpJ1+YLVqEKmQPZ11neQZMfIOEvIlFt+uQ7v0Fq+SrSTpj4cPXB5DqttBlG1SXagh3p8AUxJIkeH0SIFHRJXkCC1DdmEZoy4xeH6e4ZRD9BsdrtP0NOb1y/n21PA4xolAWBCoeW6oauEDqQfwTKGL4c4+rOXyQrrmKECMBAbVGITXEBMImqTzD9SJOtewmZTBi8DaIYJDIlAY8jDIAJg8RPwpMyTmimH78=) 2026-01-10 14:06:31.726597 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOLXtcL2pm1XNNiQp4zJSJq60hRvxXDpqLKyVuqOhk2eUWloRlAAnhrJVzpuGvtucslWghFwtp4lKNQe/esiExE=) 2026-01-10 14:06:31.726609 | orchestrator | 2026-01-10 14:06:31.726620 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-10 14:06:31.726631 | orchestrator | Saturday 10 January 2026 14:06:30 +0000 (0:00:01.115) 0:00:22.880 ****** 2026-01-10 14:06:31.726642 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHLqHtx2TI+bCii97o1vmUjISE4CAqvy8wVcOWnfJtP7) 2026-01-10 14:06:31.726653 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDIOEpWFRGk9xO0/8elzrXnGg2UAjXA+QEQ4wtmVqDfMDaxGqWjTsbVAXjZRrGwM3hjOfxcwyYEx+4jOXQghIzTV3kT34wqvaTQG011lpmquWBSpGo2+9YePfSqY+Rvj9Sq2/s7P/MyrBdW32n2mcHxxDRMN+7l+k8TwW+um5j8IDiVQN03kh52j8jTo5uyc+iedL+3RcpiLXNNuJCLQloSKrEvp7KXbU9bCRfvUwKN3Mf4pV7aqqtO07v0e5oACuRzlD2dfpp3yMLH+L8UlT3eTSrxHBmgS14Vq2V+4YqloA3lXQOzzbCS5gl4l/fvDOWDiTJIb2p2ZLJzcSgt4YH5iNhSudRA1wl64jDH8LvLiE8npE1R70qoz+3ZXp8WbjeBFEE4j2eGe3bEbdvsGjGswwTp1f312CwsJ2tU3/KbqmdBFfGMJIerYZHjS3yzJnnnHKxp6jhtK12A+eI4SKlTs5Xy1x8/yXd7suvQh/Ql5PjRMqLeF4t1P/pNZOYhnR8=) 2026-01-10 14:06:31.726699 | orchestrator | changed: [testbed-manager] => 
(item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBE/nhAr9lEV7nIRvpSuHidj4I/4VKPvoIz1QhjDznDIvUAUxEzh/c7IQtt3nKAv8Z82XIT20XOGzqqP0dQmbbJE=) 2026-01-10 14:06:36.156946 | orchestrator | 2026-01-10 14:06:36.157064 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-10 14:06:36.157086 | orchestrator | Saturday 10 January 2026 14:06:31 +0000 (0:00:01.056) 0:00:23.936 ****** 2026-01-10 14:06:36.157102 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFb1lbgRWd2WXCfO2LVWGptYVjywyiu4kEmJorOhpQFmVbajIU+nd8G0VSvGnG/NITnM6c0FWmFNtDOIaCTs1uo=) 2026-01-10 14:06:36.157122 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCnSI3cwGJkE9t18OH3G5e17bmPiZJugQo6EJGQzw2Soth4HmOk0z8LSqScLydsohm1IjRQIAUlilxMY9ALXALHD4mHfGoyFWYKECeOjTBdIDnIE6WqdTEOu6yF6RmOVeYeoJPXFvZRgZ6M17vYm39bc3KanT9fQkCFjVXnbV5fDJo55Uwpec/pf7TH5jDS2m+Koi6JOLhgvnEVRp1EGwW1/55mKXd7ZPqxGvx6piC03SH8AquqIP9AEorMFjT+hLpuiUv0axyB1Yk2QLGmwg2ShZc4ObRWVDfyLvDR0SW93sCR94FPkovEji0wk1183N5Ma2y2WbrPWiLONrLz5jlNy6pV1dynZAHDLpwkzkNPvGoXLLgdYaazabjnx4yPGdHtUF+Uw/KT0okqlW1djtwenbQ3I7nzs3iFfpMvH8SpE7PphtbnSZa/StmPigNMfdcSZfcmYPdcf2EH2PBgHL6BxRtMa0WMzUbq2/6KHNe/8UtJGVb73iF4bxmnVuITdzs=) 2026-01-10 14:06:36.157141 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAcQ9doEewBrdmCL1mMO1EyiaQIJ9vf6sgu6Ww5/R9rG) 2026-01-10 14:06:36.157157 | orchestrator | 2026-01-10 14:06:36.157170 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-10 14:06:36.157182 | orchestrator | Saturday 10 January 2026 14:06:32 +0000 (0:00:01.043) 0:00:24.979 ****** 2026-01-10 14:06:36.157196 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 
AAAAC3NzaC1lZDI1NTE5AAAAIGlUX300lvyGiiZdgkfCYmshU76w9+27YMNgj7iNg2G3) 2026-01-10 14:06:36.157210 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDv8eWrLlPQx1ErYLP2G9Vp5lS/WqezqQtGeHNrwc7pj/KwsOIkJ2dER15pX0XGQ6uBktjmhPxpPJ41hCQllqzBpngBMcwd+pQ6pzX3A4YWD/2Sv/B4nTPnd4/+yBGaEftDD071RxcFBK6gwsNNi2J8qMkou7xYahlsQIPtbX+IC9ze1atA0LHYWcT+nFRpG//bNsokAlRSAyuuVfuzZGqzVjMiPO6xBUV1rCNKdprjC+AxMl+Bc45xLM4cvf3seMhJ/mX2ndJexw2/LHJQccTLVepWOrspwp4bAUtb4bkjWp0BHpR8MhSSpAnvGXs1sweo4u27inKnMBbiJmvW0GCTbEkDGg6mimdOYpM+NRLmWR9F09GDjtSMuUTIx1K5SeTGoF+PkFaqSUtv+yxgU3BP2TvajnChL2XaaeYwJnCxF/a/1DPQRfAI/YHLYzRKyupNMkeEMbm2U2glnpOtt8QQ3Fz0+hDZ26+SqAZMIv9Hw/nGItSMofNdaSzVDvJccJM=) 2026-01-10 14:06:36.157224 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGAFFe06dyXfOVZ57KRjv7d3PkR2XxSSJvYCjHv/HAeRtFxbh3HMhhBHHrsL6qZ5hi5w8PMxK89yDG5IzNT4Qn8=) 2026-01-10 14:06:36.157238 | orchestrator | 2026-01-10 14:06:36.157252 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-10 14:06:36.157264 | orchestrator | Saturday 10 January 2026 14:06:33 +0000 (0:00:01.057) 0:00:26.036 ****** 2026-01-10 14:06:36.157277 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJQywKKwpvmVhaYSLtciVWWqvUaiAP6Qu2/6chdjZlthfgACvgxEAZinUJAzq3X/s0d/UFDh3oPN461qyoX6Lmg=) 2026-01-10 14:06:36.157313 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCl+044DPWsYidmx2fbPKUY4e/tdYE7T68Cd6qYIXsr3kPU4RkNczEO+a67qlJiVT49OpTcQWkwJgdNfUqNJxpI/hojHAY/+yP59xEmrPcLc97EUDfMztWfx/EKrpdpsMilDk+pXQaJlFhImPms4lmJHEo6QpBcCpkR8wfE0lDdKY6I9qkscGsBUDZgil3WzlxEhbySyjoIQrsXQNdeCXnkvGH4RLEh2sEC+QpqXfAvwXubAR+tx8+j+p7EtHCL/j6OmnjdXAtQWueG3xtBHI+oyxQFCZj/4EdR5stDFnWt2NrGa67rrXjVoUqlNmPO+qTBT7IW2h2Lh9ojSBl/8OMmQl3JOkrTnHzncrlnQOwch0qQoGvKYnpo5OJUBd3BjXUSYgwW/5GynzsPVyeSM5YWVaRxiDA3Fq8CkMJsS15Q3yb84+60oeNQSUu9tHZ2Vd63rGQST/9DRnzY7dI195wPcHSfU/lgcOfUKuK4+RYhxCYkBxwnU3UsWyoURBV8BzM=) 2026-01-10 14:06:36.157329 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAZt5MvioXCcBoqUuzB5F1FS4D4XLLo8GATw2xiaH+fD) 2026-01-10 14:06:36.157366 | orchestrator | 2026-01-10 14:06:36.157381 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2026-01-10 14:06:36.157394 | orchestrator | Saturday 10 January 2026 14:06:34 +0000 (0:00:01.091) 0:00:27.128 ****** 2026-01-10 14:06:36.157409 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-01-10 14:06:36.157423 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2026-01-10 14:06:36.157436 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2026-01-10 14:06:36.157448 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2026-01-10 14:06:36.157461 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-01-10 14:06:36.157474 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-01-10 14:06:36.157488 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-01-10 14:06:36.157504 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:06:36.157543 | orchestrator | 2026-01-10 14:06:36.157579 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2026-01-10 14:06:36.157594 | orchestrator | Saturday 10 January 
2026 14:06:35 +0000 (0:00:00.165) 0:00:27.294 ****** 2026-01-10 14:06:36.157608 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:06:36.157622 | orchestrator | 2026-01-10 14:06:36.157636 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2026-01-10 14:06:36.157650 | orchestrator | Saturday 10 January 2026 14:06:35 +0000 (0:00:00.056) 0:00:27.351 ****** 2026-01-10 14:06:36.157663 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:06:36.157677 | orchestrator | 2026-01-10 14:06:36.157690 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2026-01-10 14:06:36.157704 | orchestrator | Saturday 10 January 2026 14:06:35 +0000 (0:00:00.055) 0:00:27.406 ****** 2026-01-10 14:06:36.157716 | orchestrator | changed: [testbed-manager] 2026-01-10 14:06:36.157729 | orchestrator | 2026-01-10 14:06:36.157740 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-10 14:06:36.157764 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-01-10 14:06:36.157778 | orchestrator | 2026-01-10 14:06:36.157791 | orchestrator | 2026-01-10 14:06:36.157803 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-10 14:06:36.157815 | orchestrator | Saturday 10 January 2026 14:06:35 +0000 (0:00:00.742) 0:00:28.148 ****** 2026-01-10 14:06:36.157829 | orchestrator | =============================================================================== 2026-01-10 14:06:36.157842 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.06s 2026-01-10 14:06:36.157855 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.41s 2026-01-10 14:06:36.157870 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.19s 2026-01-10 
14:06:36.157883 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.12s 2026-01-10 14:06:36.157896 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s 2026-01-10 14:06:36.157909 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s 2026-01-10 14:06:36.157922 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s 2026-01-10 14:06:36.157934 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s 2026-01-10 14:06:36.157947 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s 2026-01-10 14:06:36.157960 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2026-01-10 14:06:36.157974 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2026-01-10 14:06:36.157988 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2026-01-10 14:06:36.158078 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2026-01-10 14:06:36.158095 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2026-01-10 14:06:36.158109 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2026-01-10 14:06:36.158122 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2026-01-10 14:06:36.158136 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.74s 2026-01-10 14:06:36.158150 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.17s 2026-01-10 14:06:36.158163 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.17s 
2026-01-10 14:06:36.158176 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.17s 2026-01-10 14:06:36.479681 | orchestrator | + osism apply squid 2026-01-10 14:06:48.549424 | orchestrator | 2026-01-10 14:06:48 | INFO  | Task 408dabc9-b4ba-43c6-8502-e3a5ec0e487b (squid) was prepared for execution. 2026-01-10 14:06:48.549629 | orchestrator | 2026-01-10 14:06:48 | INFO  | It takes a moment until task 408dabc9-b4ba-43c6-8502-e3a5ec0e487b (squid) has been started and output is visible here. 2026-01-10 14:08:52.791610 | orchestrator | 2026-01-10 14:08:52.791775 | orchestrator | PLAY [Apply role squid] ******************************************************** 2026-01-10 14:08:52.791795 | orchestrator | 2026-01-10 14:08:52.791808 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2026-01-10 14:08:52.791820 | orchestrator | Saturday 10 January 2026 14:06:52 +0000 (0:00:00.178) 0:00:00.178 ****** 2026-01-10 14:08:52.791832 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2026-01-10 14:08:52.791845 | orchestrator | 2026-01-10 14:08:52.791856 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2026-01-10 14:08:52.791867 | orchestrator | Saturday 10 January 2026 14:06:52 +0000 (0:00:00.091) 0:00:00.270 ****** 2026-01-10 14:08:52.791879 | orchestrator | ok: [testbed-manager] 2026-01-10 14:08:52.791891 | orchestrator | 2026-01-10 14:08:52.791902 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2026-01-10 14:08:52.791913 | orchestrator | Saturday 10 January 2026 14:06:54 +0000 (0:00:01.501) 0:00:01.772 ****** 2026-01-10 14:08:52.791925 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2026-01-10 14:08:52.791936 | orchestrator | changed: 
[testbed-manager] => (item=/opt/squid/configuration/conf.d) 2026-01-10 14:08:52.791947 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2026-01-10 14:08:52.791958 | orchestrator | 2026-01-10 14:08:52.791968 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2026-01-10 14:08:52.791979 | orchestrator | Saturday 10 January 2026 14:06:55 +0000 (0:00:01.167) 0:00:02.939 ****** 2026-01-10 14:08:52.791990 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2026-01-10 14:08:52.792001 | orchestrator | 2026-01-10 14:08:52.792012 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2026-01-10 14:08:52.792023 | orchestrator | Saturday 10 January 2026 14:06:56 +0000 (0:00:01.039) 0:00:03.979 ****** 2026-01-10 14:08:52.792033 | orchestrator | ok: [testbed-manager] 2026-01-10 14:08:52.792045 | orchestrator | 2026-01-10 14:08:52.792058 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2026-01-10 14:08:52.792070 | orchestrator | Saturday 10 January 2026 14:06:56 +0000 (0:00:00.339) 0:00:04.318 ****** 2026-01-10 14:08:52.792082 | orchestrator | changed: [testbed-manager] 2026-01-10 14:08:52.792095 | orchestrator | 2026-01-10 14:08:52.792112 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2026-01-10 14:08:52.792126 | orchestrator | Saturday 10 January 2026 14:06:57 +0000 (0:00:00.924) 0:00:05.243 ****** 2026-01-10 14:08:52.792139 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2026-01-10 14:08:52.792188 | orchestrator | ok: [testbed-manager]
2026-01-10 14:08:52.792201 | orchestrator |
2026-01-10 14:08:52.792214 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] *****************
2026-01-10 14:08:52.792226 | orchestrator | Saturday 10 January 2026 14:07:35 +0000 (0:00:38.195) 0:00:43.438 ******
2026-01-10 14:08:52.792239 | orchestrator | changed: [testbed-manager]
2026-01-10 14:08:52.792251 | orchestrator |
2026-01-10 14:08:52.792264 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] *******
2026-01-10 14:08:52.792276 | orchestrator | Saturday 10 January 2026 14:07:51 +0000 (0:00:15.724) 0:00:59.163 ******
2026-01-10 14:08:52.792289 | orchestrator | Pausing for 60 seconds
2026-01-10 14:08:52.792303 | orchestrator | changed: [testbed-manager]
2026-01-10 14:08:52.792316 | orchestrator |
2026-01-10 14:08:52.792328 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] ***
2026-01-10 14:08:52.792341 | orchestrator | Saturday 10 January 2026 14:08:51 +0000 (0:01:00.075) 0:01:59.239 ******
2026-01-10 14:08:52.792354 | orchestrator | ok: [testbed-manager]
2026-01-10 14:08:52.792367 | orchestrator |
2026-01-10 14:08:52.792380 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] *****
2026-01-10 14:08:52.792393 | orchestrator | Saturday 10 January 2026 14:08:51 +0000 (0:00:00.071) 0:01:59.310 ******
2026-01-10 14:08:52.792405 | orchestrator | changed: [testbed-manager]
2026-01-10 14:08:52.792415 | orchestrator |
2026-01-10 14:08:52.792426 | orchestrator | PLAY RECAP *********************************************************************
2026-01-10 14:08:52.792437 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:08:52.792448 | orchestrator |
2026-01-10 14:08:52.792458 | orchestrator |
2026-01-10 14:08:52.792469 | orchestrator |
TASKS RECAP ********************************************************************
2026-01-10 14:08:52.792480 | orchestrator | Saturday 10 January 2026 14:08:52 +0000 (0:00:00.665) 0:01:59.976 ******
2026-01-10 14:08:52.792490 | orchestrator | ===============================================================================
2026-01-10 14:08:52.792501 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.08s
2026-01-10 14:08:52.792512 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 38.20s
2026-01-10 14:08:52.792522 | orchestrator | osism.services.squid : Restart squid service --------------------------- 15.72s
2026-01-10 14:08:52.792577 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.50s
2026-01-10 14:08:52.792590 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.17s
2026-01-10 14:08:52.792601 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.04s
2026-01-10 14:08:52.792612 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.92s
2026-01-10 14:08:52.792622 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.67s
2026-01-10 14:08:52.792633 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.34s
2026-01-10 14:08:52.792644 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.09s
2026-01-10 14:08:52.792655 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.07s
2026-01-10 14:08:53.127756 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]]
2026-01-10 14:08:53.128158 | orchestrator | ++ semver 9.5.0 10.0.0-0
2026-01-10 14:08:53.186634 | orchestrator | + [[ -1 -ge 0 ]]
2026-01-10 14:08:53.186797 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla/release
2026-01-10 14:08:53.194790 | orchestrator | + set -e
2026-01-10 14:08:53.194878 | orchestrator | + NAMESPACE=kolla/release
2026-01-10 14:08:53.194897 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla/release#g' /opt/configuration/inventory/group_vars/all/kolla.yml
2026-01-10 14:08:53.199671 | orchestrator | ++ semver 9.5.0 9.0.0
2026-01-10 14:08:53.273005 | orchestrator | + [[ 1 -lt 0 ]]
2026-01-10 14:08:53.273964 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes
2026-01-10 14:09:05.373849 | orchestrator | 2026-01-10 14:09:05 | INFO  | Task 9c2560b7-b0e7-45a9-ab9b-16cf73a18c37 (operator) was prepared for execution.
2026-01-10 14:09:05.374097 | orchestrator | 2026-01-10 14:09:05 | INFO  | It takes a moment until task 9c2560b7-b0e7-45a9-ab9b-16cf73a18c37 (operator) has been started and output is visible here.
2026-01-10 14:09:21.562924 | orchestrator |
2026-01-10 14:09:21.563044 | orchestrator | PLAY [Make ssh pipelining working] *********************************************
2026-01-10 14:09:21.563061 | orchestrator |
2026-01-10 14:09:21.563074 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-01-10 14:09:21.563085 | orchestrator | Saturday 10 January 2026 14:09:09 +0000 (0:00:00.147) 0:00:00.147 ******
2026-01-10 14:09:21.563097 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:09:21.563109 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:09:21.563120 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:09:21.563131 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:09:21.563142 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:09:21.563153 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:09:21.563164 | orchestrator |
2026-01-10 14:09:21.563175 | orchestrator | TASK [Do not require tty for all users] ****************************************
2026-01-10 14:09:21.563186 | orchestrator | Saturday 10 January 2026 14:09:12 +0000 (0:00:03.303) 0:00:03.450 ******
2026-01-10 14:09:21.563197 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:09:21.563208 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:09:21.563218 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:09:21.563229 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:09:21.563240 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:09:21.563251 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:09:21.563262 | orchestrator |
2026-01-10 14:09:21.563273 | orchestrator | PLAY [Apply role operator] *****************************************************
2026-01-10 14:09:21.563284 | orchestrator |
2026-01-10 14:09:21.563298 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] *****
2026-01-10 14:09:21.563317 | orchestrator | Saturday 10 January 2026 14:09:13 +0000 (0:00:00.776) 0:00:04.227 ******
2026-01-10 14:09:21.563334 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:09:21.563353 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:09:21.563374 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:09:21.563415 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:09:21.563428 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:09:21.563439 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:09:21.563451 | orchestrator |
2026-01-10 14:09:21.563464 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] ***
2026-01-10 14:09:21.563477 | orchestrator | Saturday 10 January 2026 14:09:13 +0000 (0:00:00.181) 0:00:04.408 ******
2026-01-10 14:09:21.563489 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:09:21.563502 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:09:21.563514 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:09:21.563527 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:09:21.563585 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:09:21.563598 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:09:21.563610 | orchestrator |
2026-01-10 14:09:21.563623 | orchestrator | TASK [osism.commons.operator : Create operator group] **************************
2026-01-10 14:09:21.563636 | orchestrator | Saturday 10 January 2026 14:09:13 +0000 (0:00:00.168) 0:00:04.577 ******
2026-01-10 14:09:21.563649 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:09:21.563663 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:09:21.563675 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:09:21.563688 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:09:21.563700 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:09:21.563713 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:09:21.563725 | orchestrator |
2026-01-10 14:09:21.563738 | orchestrator | TASK [osism.commons.operator : Create user] ************************************
2026-01-10 14:09:21.563750 | orchestrator | Saturday 10 January 2026 14:09:14 +0000 (0:00:00.616) 0:00:05.193 ******
2026-01-10 14:09:21.563763 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:09:21.563776 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:09:21.563788 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:09:21.563824 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:09:21.563836 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:09:21.563847 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:09:21.563858 | orchestrator |
2026-01-10 14:09:21.563869 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ******************
2026-01-10 14:09:21.563880 | orchestrator | Saturday 10 January 2026 14:09:15 +0000 (0:00:00.887) 0:00:06.081 ******
2026-01-10 14:09:21.563892 | orchestrator | changed: [testbed-node-0] => (item=adm)
2026-01-10 14:09:21.563903 | orchestrator | changed: [testbed-node-4] => (item=adm)
2026-01-10 14:09:21.563914 | orchestrator | changed: [testbed-node-1] => (item=adm)
2026-01-10 14:09:21.563925 | orchestrator | changed: [testbed-node-3] => (item=adm)
2026-01-10 14:09:21.563936 | orchestrator | changed: [testbed-node-2] => (item=adm)
2026-01-10 14:09:21.563947 | orchestrator | changed: [testbed-node-5] => (item=adm)
2026-01-10 14:09:21.563958 | orchestrator | changed: [testbed-node-0] => (item=sudo)
2026-01-10 14:09:21.563968 | orchestrator | changed: [testbed-node-4] => (item=sudo)
2026-01-10 14:09:21.563979 | orchestrator | changed: [testbed-node-1] => (item=sudo)
2026-01-10 14:09:21.563990 | orchestrator | changed: [testbed-node-3] => (item=sudo)
2026-01-10 14:09:21.564001 | orchestrator | changed: [testbed-node-2] => (item=sudo)
2026-01-10 14:09:21.564012 | orchestrator | changed: [testbed-node-5] => (item=sudo)
2026-01-10 14:09:21.564023 | orchestrator |
2026-01-10 14:09:21.564033 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] *************************
2026-01-10 14:09:21.564044 | orchestrator | Saturday 10 January 2026 14:09:16 +0000 (0:00:01.318) 0:00:07.400 ******
2026-01-10 14:09:21.564055 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:09:21.564067 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:09:21.564077 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:09:21.564089 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:09:21.564100 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:09:21.564111 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:09:21.564121 | orchestrator |
2026-01-10 14:09:21.564132 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] ***
2026-01-10 14:09:21.564144 | orchestrator | Saturday 10 January 2026 14:09:18 +0000 (0:00:01.279) 0:00:08.679 ******
2026-01-10 14:09:21.564155 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created
2026-01-10 14:09:21.564166 | orchestrator | with a mode of 0700, this may cause issues when running as another user. To
2026-01-10 14:09:21.564177 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually
2026-01-10 14:09:21.564189 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8)
2026-01-10 14:09:21.564217 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8)
2026-01-10 14:09:21.564229 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8)
2026-01-10 14:09:21.564240 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8)
2026-01-10 14:09:21.564251 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8)
2026-01-10 14:09:21.564262 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8)
2026-01-10 14:09:21.564272 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8)
2026-01-10 14:09:21.564283 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8)
2026-01-10 14:09:21.564294 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8)
2026-01-10 14:09:21.564305 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8)
2026-01-10 14:09:21.564316 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8)
2026-01-10 14:09:21.564326 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8)
2026-01-10 14:09:21.564337 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8)
2026-01-10 14:09:21.564348 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8)
2026-01-10 14:09:21.564359 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8)
2026-01-10 14:09:21.564378 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8)
2026-01-10 14:09:21.564389 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8)
2026-01-10 14:09:21.564400 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8)
2026-01-10 14:09:21.564411 | orchestrator |
2026-01-10 14:09:21.564422 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] ***
2026-01-10 14:09:21.564434 | orchestrator | Saturday 10 January 2026 14:09:19 +0000 (0:00:01.297) 0:00:09.977 ******
2026-01-10 14:09:21.564445 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:09:21.564456 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:09:21.564467 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:09:21.564478 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:09:21.564489 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:09:21.564500 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:09:21.564511 | orchestrator |
2026-01-10 14:09:21.564522 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] ***
2026-01-10 14:09:21.564614 | orchestrator | Saturday 10 January 2026 14:09:19 +0000 (0:00:00.157) 0:00:10.134 ******
2026-01-10 14:09:21.564629 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:09:21.564640 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:09:21.564651 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:09:21.564662 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:09:21.564673 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:09:21.564682 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:09:21.564692 | orchestrator |
2026-01-10 14:09:21.564703 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2026-01-10 14:09:21.564712 | orchestrator | Saturday 10 January 2026 14:09:19 +0000 (0:00:00.178) 0:00:10.313 ******
2026-01-10 14:09:21.564722 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:09:21.564732 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:09:21.564741 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:09:21.564751 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:09:21.564760 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:09:21.564770 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:09:21.564779 | orchestrator |
2026-01-10 14:09:21.564789 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2026-01-10 14:09:21.564799 | orchestrator | Saturday 10 January 2026 14:09:20 +0000 (0:00:00.555) 0:00:10.868 ******
2026-01-10 14:09:21.564808 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:09:21.564818 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:09:21.564828 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:09:21.564837 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:09:21.564847 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:09:21.564857 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:09:21.564866 | orchestrator |
2026-01-10 14:09:21.564876 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2026-01-10 14:09:21.564886 | orchestrator | Saturday 10 January 2026 14:09:20 +0000 (0:00:00.200) 0:00:11.069 ******
2026-01-10 14:09:21.564895 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-01-10 14:09:21.564914 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:09:21.564924 | orchestrator | changed: [testbed-node-1] => (item=None)
2026-01-10 14:09:21.564934 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:09:21.564944 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-01-10 14:09:21.564954 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:09:21.564963 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-01-10 14:09:21.564973 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:09:21.564983 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-01-10 14:09:21.564993 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:09:21.565002 | orchestrator | changed: [testbed-node-2] => (item=None)
2026-01-10 14:09:21.565012 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:09:21.565029 | orchestrator |
2026-01-10 14:09:21.565039 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2026-01-10 14:09:21.565049 | orchestrator | Saturday 10 January 2026 14:09:21 +0000 (0:00:00.817) 0:00:11.887 ******
2026-01-10 14:09:21.565058 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:09:21.565068 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:09:21.565077 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:09:21.565087 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:09:21.565097 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:09:21.565107 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:09:21.565116 | orchestrator |
2026-01-10 14:09:21.565126 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2026-01-10 14:09:21.565136 | orchestrator | Saturday 10 January 2026 14:09:21 +0000 (0:00:00.157) 0:00:12.044 ******
2026-01-10 14:09:21.565146 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:09:21.565155 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:09:21.565165 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:09:21.565174 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:09:21.565191 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:09:22.998877 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:09:22.999302 | orchestrator |
2026-01-10 14:09:22.999322 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2026-01-10 14:09:22.999332 | orchestrator | Saturday 10 January 2026 14:09:21 +0000 (0:00:00.165) 0:00:12.210 ******
2026-01-10 14:09:22.999341 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:09:22.999351 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:09:22.999360 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:09:22.999365 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:09:22.999387 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:09:22.999393 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:09:22.999397 | orchestrator |
2026-01-10 14:09:22.999409 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2026-01-10 14:09:22.999415 | orchestrator | Saturday 10 January 2026 14:09:21 +0000 (0:00:00.165) 0:00:12.375 ******
2026-01-10 14:09:22.999420 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:09:22.999425 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:09:22.999430 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:09:22.999435 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:09:22.999440 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:09:22.999445 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:09:22.999450 | orchestrator |
2026-01-10 14:09:22.999455 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2026-01-10 14:09:22.999461 | orchestrator | Saturday 10 January 2026 14:09:22 +0000 (0:00:00.753) 0:00:13.129 ******
2026-01-10 14:09:22.999466 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:09:22.999470 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:09:22.999475 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:09:22.999496 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:09:22.999501 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:09:22.999506 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:09:22.999511 | orchestrator |
2026-01-10 14:09:22.999516 | orchestrator | PLAY RECAP *********************************************************************
2026-01-10 14:09:22.999522 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-10 14:09:22.999528 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-10 14:09:22.999550 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-10 14:09:22.999558 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-10 14:09:22.999578 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-10 14:09:22.999583 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-10 14:09:22.999588 | orchestrator |
2026-01-10 14:09:22.999593 | orchestrator |
2026-01-10 14:09:22.999598 | orchestrator | TASKS RECAP ********************************************************************
2026-01-10 14:09:22.999602 | orchestrator | Saturday 10 January 2026 14:09:22 +0000 (0:00:00.250) 0:00:13.380 ******
2026-01-10 14:09:22.999607 | orchestrator | ===============================================================================
2026-01-10 14:09:22.999612 | orchestrator | Gathering Facts --------------------------------------------------------- 3.30s
2026-01-10 14:09:22.999617 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.32s
2026-01-10 14:09:22.999623 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.30s
2026-01-10 14:09:22.999628 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.28s
2026-01-10 14:09:22.999633 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.89s
2026-01-10 14:09:22.999638 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.82s
2026-01-10 14:09:22.999643 | orchestrator | Do not require tty for all users ---------------------------------------- 0.78s
2026-01-10 14:09:22.999648 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.75s
2026-01-10 14:09:22.999652 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.62s
2026-01-10 14:09:22.999657 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.56s
2026-01-10 14:09:22.999662 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.25s
2026-01-10 14:09:22.999667 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.20s
2026-01-10 14:09:22.999671 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.18s
2026-01-10 14:09:22.999676 | orchestrator | osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file --- 0.18s
2026-01-10 14:09:22.999681 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.17s
2026-01-10 14:09:22.999686 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.17s
2026-01-10 14:09:22.999691 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.17s
2026-01-10 14:09:22.999695 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.16s
2026-01-10 14:09:22.999700 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.16s
2026-01-10 14:09:23.332189 | orchestrator | + osism apply --environment custom facts
2026-01-10 14:09:25.232913 | orchestrator | 2026-01-10 14:09:25 | INFO  | Trying to run play facts in environment custom
2026-01-10 14:09:35.365070 | orchestrator | 2026-01-10 14:09:35 | INFO  | Task d03cb396-3b97-46bd-b3d4-ff5bbcde7fca (facts) was prepared for execution.
2026-01-10 14:09:35.365205 | orchestrator | 2026-01-10 14:09:35 | INFO  | It takes a moment until task d03cb396-3b97-46bd-b3d4-ff5bbcde7fca (facts) has been started and output is visible here.
2026-01-10 14:10:21.233596 | orchestrator |
2026-01-10 14:10:21.233737 | orchestrator | PLAY [Copy custom network devices fact] ****************************************
2026-01-10 14:10:21.233754 | orchestrator |
2026-01-10 14:10:21.233766 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-01-10 14:10:21.233778 | orchestrator | Saturday 10 January 2026 14:09:39 +0000 (0:00:00.111) 0:00:00.111 ******
2026-01-10 14:10:21.233790 | orchestrator | ok: [testbed-manager]
2026-01-10 14:10:21.233802 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:10:21.233814 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:10:21.233855 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:10:21.233866 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:10:21.233877 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:10:21.233888 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:10:21.233899 | orchestrator |
2026-01-10 14:10:21.233910 | orchestrator | TASK [Copy fact file] **********************************************************
2026-01-10 14:10:21.233921 | orchestrator | Saturday 10 January 2026 14:09:40 +0000 (0:00:01.359) 0:00:01.470 ******
2026-01-10 14:10:21.233932 | orchestrator | ok: [testbed-manager]
2026-01-10 14:10:21.233943 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:10:21.233963 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:10:21.233990 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:10:21.234011 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:10:21.234103 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:10:21.234123 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:10:21.234142 | orchestrator |
2026-01-10 14:10:21.234161 | orchestrator | PLAY [Copy custom ceph devices facts] ******************************************
2026-01-10 14:10:21.234180 | orchestrator |
2026-01-10 14:10:21.234200 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2026-01-10 14:10:21.234220 | orchestrator | Saturday 10 January 2026 14:09:42 +0000 (0:00:01.191) 0:00:02.661 ******
2026-01-10 14:10:21.234239 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:10:21.234258 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:10:21.234277 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:10:21.234296 | orchestrator |
2026-01-10 14:10:21.234315 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2026-01-10 14:10:21.234337 | orchestrator | Saturday 10 January 2026 14:09:42 +0000 (0:00:00.130) 0:00:02.791 ******
2026-01-10 14:10:21.234355 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:10:21.234374 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:10:21.234393 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:10:21.234410 | orchestrator |
2026-01-10 14:10:21.234431 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2026-01-10 14:10:21.234451 | orchestrator | Saturday 10 January 2026 14:09:42 +0000 (0:00:00.223) 0:00:03.015 ******
2026-01-10 14:10:21.234470 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:10:21.234490 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:10:21.234511 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:10:21.234530 | orchestrator |
2026-01-10 14:10:21.234576 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2026-01-10 14:10:21.234594 | orchestrator | Saturday 10 January 2026 14:09:42 +0000 (0:00:00.225) 0:00:03.241 ******
2026-01-10 14:10:21.234615 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-10 14:10:21.234635 | orchestrator |
2026-01-10 14:10:21.234653 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2026-01-10 14:10:21.234671 | orchestrator | Saturday 10 January 2026 14:09:42 +0000 (0:00:00.152) 0:00:03.393 ******
2026-01-10 14:10:21.234687 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:10:21.234705 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:10:21.234723 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:10:21.234740 | orchestrator |
2026-01-10 14:10:21.234757 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2026-01-10 14:10:21.234774 | orchestrator | Saturday 10 January 2026 14:09:43 +0000 (0:00:00.505) 0:00:03.899 ******
2026-01-10 14:10:21.234792 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:10:21.234810 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:10:21.234828 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:10:21.234846 | orchestrator |
2026-01-10 14:10:21.234865 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2026-01-10 14:10:21.234884 | orchestrator | Saturday 10 January 2026 14:09:43 +0000 (0:00:00.147) 0:00:04.046 ******
2026-01-10 14:10:21.234903 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:10:21.234939 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:10:21.234989 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:10:21.235008 | orchestrator |
2026-01-10 14:10:21.235026 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2026-01-10 14:10:21.235044 | orchestrator | Saturday 10 January 2026 14:09:44 +0000 (0:00:01.349) 0:00:05.396 ******
2026-01-10 14:10:21.235062 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:10:21.235080 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:10:21.235098 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:10:21.235117 | orchestrator |
2026-01-10 14:10:21.235135 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-01-10 14:10:21.235154 | orchestrator | Saturday 10 January 2026 14:09:45 +0000 (0:00:00.533) 0:00:05.930 ******
2026-01-10 14:10:21.235172 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:10:21.235190 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:10:21.235207 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:10:21.235225 | orchestrator |
2026-01-10 14:10:21.235244 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-01-10 14:10:21.235353 | orchestrator | Saturday 10 January 2026 14:09:46 +0000 (0:00:01.146) 0:00:07.076 ******
2026-01-10 14:10:21.235368 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:10:21.235379 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:10:21.235390 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:10:21.235400 | orchestrator |
2026-01-10 14:10:21.235411 | orchestrator | TASK [Install required packages (RedHat)] **************************************
2026-01-10 14:10:21.235422 | orchestrator | Saturday 10 January 2026 14:10:04 +0000 (0:00:17.557) 0:00:24.634 ******
2026-01-10 14:10:21.235433 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:10:21.235443 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:10:21.235454 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:10:21.235465 | orchestrator |
2026-01-10 14:10:21.235475 | orchestrator | TASK [Install required packages (Debian)] **************************************
2026-01-10 14:10:21.235513 | orchestrator | Saturday 10 January 2026 14:10:04 +0000 (0:00:00.115) 0:00:24.750 ******
2026-01-10 14:10:21.235524 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:10:21.235554 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:10:21.235566 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:10:21.235576 | orchestrator |
2026-01-10 14:10:21.235587 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-01-10 14:10:21.235598 | orchestrator | Saturday 10 January 2026 14:10:11 +0000 (0:00:07.749) 0:00:32.500 ******
2026-01-10 14:10:21.235609 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:10:21.235620 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:10:21.235630 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:10:21.235641 | orchestrator |
2026-01-10 14:10:21.235652 | orchestrator | TASK [Copy fact files] *********************************************************
2026-01-10 14:10:21.235663 | orchestrator | Saturday 10 January 2026 14:10:12 +0000 (0:00:00.546) 0:00:33.046 ******
2026-01-10 14:10:21.235673 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2026-01-10 14:10:21.235690 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2026-01-10 14:10:21.235701 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2026-01-10 14:10:21.235713 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2026-01-10 14:10:21.235731 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2026-01-10 14:10:21.235751 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2026-01-10 14:10:21.235768 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2026-01-10 14:10:21.235787 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2026-01-10 14:10:21.235804 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2026-01-10 14:10:21.235822 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2026-01-10 14:10:21.235839 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2026-01-10 14:10:21.235871 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2026-01-10 14:10:21.235888 | orchestrator |
2026-01-10 14:10:21.235907 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-01-10 14:10:21.235924 | orchestrator | Saturday 10 January 2026 14:10:16 +0000 (0:00:03.609) 0:00:36.656 ******
2026-01-10 14:10:21.235942 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:10:21.235960 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:10:21.235979 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:10:21.235998 | orchestrator |
2026-01-10 14:10:21.236016 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-01-10 14:10:21.236033 | orchestrator |
2026-01-10 14:10:21.236052 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-01-10 14:10:21.236070 | orchestrator | Saturday 10 January 2026 14:10:17 +0000 (0:00:01.373) 0:00:38.029 ******
2026-01-10 14:10:21.236090 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:10:21.236108 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:10:21.236125 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:10:21.236144 | orchestrator | ok: [testbed-manager]
2026-01-10 14:10:21.236163 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:10:21.236182 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:10:21.236200 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:10:21.236218 | orchestrator |
2026-01-10 14:10:21.236236 | orchestrator | PLAY RECAP *********************************************************************
2026-01-10 14:10:21.236256 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:10:21.236276 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:10:21.236296 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:10:21.236314 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:10:21.236333 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-10 14:10:21.236351 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-10 14:10:21.236367 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-10 14:10:21.236385 | orchestrator |
2026-01-10 14:10:21.236403 | orchestrator |
2026-01-10 14:10:21.236421 | orchestrator | TASKS RECAP ********************************************************************
2026-01-10 14:10:21.236440 | orchestrator | Saturday 10 January 2026 14:10:21 +0000 (0:00:03.747) 0:00:41.777 ******
2026-01-10 14:10:21.236458 | orchestrator | ===============================================================================
2026-01-10 14:10:21.236475 | orchestrator | osism.commons.repository : Update package cache ------------------------ 17.56s
2026-01-10 14:10:21.236493 | orchestrator | Install required packages (Debian) -------------------------------------- 7.75s
2026-01-10 14:10:21.236511 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.75s
2026-01-10 14:10:21.236528 | orchestrator | Copy fact files --------------------------------------------------------- 3.61s
2026-01-10 14:10:21.236606 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.37s
2026-01-10 14:10:21.236624 | orchestrator | Create custom facts directory ------------------------------------------- 1.36s
2026-01-10 14:10:21.236659 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.35s
2026-01-10 14:10:21.487987 | orchestrator | Copy fact file ---------------------------------------------------------- 1.19s
2026-01-10 14:10:21.488119 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.15s
2026-01-10 14:10:21.488168 | orchestrator | Create custom facts directory ------------------------------------------- 0.55s
2026-01-10 14:10:21.488180 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.53s
2026-01-10 14:10:21.488191 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.51s
2026-01-10 14:10:21.488202 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.23s
2026-01-10 14:10:21.488213 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.22s
2026-01-10 14:10:21.488242 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.15s
2026-01-10 14:10:21.488254 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.15s
2026-01-10 14:10:21.488265 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.13s
2026-01-10 14:10:21.488276 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.12s
2026-01-10 14:10:21.798987 | orchestrator | + osism apply bootstrap
2026-01-10 14:10:33.899395 | orchestrator | 2026-01-10 14:10:33 | INFO  | Task a2d827f2-a419-46a4-936b-81d616b97c57 (bootstrap) was prepared for execution.
2026-01-10 14:10:33.899530 | orchestrator | 2026-01-10 14:10:33 | INFO  | It takes a moment until task a2d827f2-a419-46a4-936b-81d616b97c57 (bootstrap) has been started and output is visible here.
2026-01-10 14:10:50.737466 | orchestrator | 2026-01-10 14:10:50.737647 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************ 2026-01-10 14:10:50.737667 | orchestrator | 2026-01-10 14:10:50.737680 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************ 2026-01-10 14:10:50.737691 | orchestrator | Saturday 10 January 2026 14:10:38 +0000 (0:00:00.169) 0:00:00.169 ****** 2026-01-10 14:10:50.737702 | orchestrator | ok: [testbed-manager] 2026-01-10 14:10:50.737715 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:10:50.737726 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:10:50.737737 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:10:50.737748 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:10:50.737759 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:10:50.737778 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:10:50.737795 | orchestrator | 2026-01-10 14:10:50.737806 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-01-10 14:10:50.737817 | orchestrator | 2026-01-10 14:10:50.737828 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-01-10 14:10:50.737839 | orchestrator | Saturday 10 January 2026 14:10:38 +0000 (0:00:00.270) 0:00:00.440 ****** 2026-01-10 14:10:50.737850 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:10:50.737861 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:10:50.737872 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:10:50.737883 | orchestrator | ok: [testbed-manager] 2026-01-10 14:10:50.737894 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:10:50.737905 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:10:50.737923 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:10:50.737935 | orchestrator | 2026-01-10 14:10:50.737946 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] *************************** 
2026-01-10 14:10:50.737957 | orchestrator | 2026-01-10 14:10:50.737967 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-01-10 14:10:50.737978 | orchestrator | Saturday 10 January 2026 14:10:42 +0000 (0:00:03.973) 0:00:04.414 ****** 2026-01-10 14:10:50.737990 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-01-10 14:10:50.738001 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2026-01-10 14:10:50.738012 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)  2026-01-10 14:10:50.738095 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2026-01-10 14:10:50.738107 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-10 14:10:50.738118 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2026-01-10 14:10:50.738157 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-10 14:10:50.738169 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)  2026-01-10 14:10:50.738179 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-10 14:10:50.738190 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-01-10 14:10:50.738201 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-01-10 14:10:50.738212 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-01-10 14:10:50.738223 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-01-10 14:10:50.738234 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-01-10 14:10:50.738245 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-01-10 14:10:50.738256 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)  2026-01-10 14:10:50.738267 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:10:50.738278 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-1)  2026-01-10 14:10:50.738288 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-01-10 14:10:50.738299 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-01-10 14:10:50.738310 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-01-10 14:10:50.738320 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-01-10 14:10:50.738331 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:10:50.738342 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-01-10 14:10:50.738352 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)  2026-01-10 14:10:50.738363 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-01-10 14:10:50.738374 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)  2026-01-10 14:10:50.738385 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-01-10 14:10:50.738395 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:10:50.738406 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2026-01-10 14:10:50.738416 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-01-10 14:10:50.738427 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-01-10 14:10:50.738437 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2026-01-10 14:10:50.738448 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-01-10 14:10:50.738459 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-01-10 14:10:50.738469 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2026-01-10 14:10:50.738480 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)  2026-01-10 14:10:50.738491 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-01-10 14:10:50.738501 | orchestrator | skipping: [testbed-node-5] => 
(item=testbed-node-1)  2026-01-10 14:10:50.738512 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-01-10 14:10:50.738523 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-01-10 14:10:50.738553 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-01-10 14:10:50.738564 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-01-10 14:10:50.738575 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:10:50.738585 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-01-10 14:10:50.738596 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-01-10 14:10:50.738627 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-01-10 14:10:50.738638 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-01-10 14:10:50.738649 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-01-10 14:10:50.738660 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:10:50.738670 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:10:50.738681 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-01-10 14:10:50.738700 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-01-10 14:10:50.738711 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-01-10 14:10:50.738722 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-01-10 14:10:50.738733 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:10:50.738743 | orchestrator | 2026-01-10 14:10:50.738772 | orchestrator | PLAY [Apply bootstrap roles part 1] ******************************************** 2026-01-10 14:10:50.738783 | orchestrator | 2026-01-10 14:10:50.738794 | orchestrator | TASK [osism.commons.hostname : Set hostname] *********************************** 2026-01-10 14:10:50.738805 | orchestrator | Saturday 10 January 2026 14:10:42 +0000 
(0:00:00.493) 0:00:04.908 ****** 2026-01-10 14:10:50.738816 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:10:50.738827 | orchestrator | ok: [testbed-manager] 2026-01-10 14:10:50.738837 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:10:50.738848 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:10:50.738858 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:10:50.738869 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:10:50.738879 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:10:50.738890 | orchestrator | 2026-01-10 14:10:50.738901 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] ***************************** 2026-01-10 14:10:50.738912 | orchestrator | Saturday 10 January 2026 14:10:44 +0000 (0:00:01.278) 0:00:06.186 ****** 2026-01-10 14:10:50.738922 | orchestrator | ok: [testbed-manager] 2026-01-10 14:10:50.738933 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:10:50.738944 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:10:50.738954 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:10:50.738965 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:10:50.738975 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:10:50.738986 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:10:50.738996 | orchestrator | 2026-01-10 14:10:50.739007 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] *********************** 2026-01-10 14:10:50.739018 | orchestrator | Saturday 10 January 2026 14:10:45 +0000 (0:00:01.229) 0:00:07.416 ****** 2026-01-10 14:10:50.739030 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:10:50.739044 | orchestrator | 2026-01-10 14:10:50.739055 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ****************************** 2026-01-10 14:10:50.739066 | orchestrator 
| Saturday 10 January 2026 14:10:45 +0000 (0:00:00.277) 0:00:07.694 ****** 2026-01-10 14:10:50.739076 | orchestrator | changed: [testbed-manager] 2026-01-10 14:10:50.739087 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:10:50.739098 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:10:50.739108 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:10:50.739119 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:10:50.739130 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:10:50.739140 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:10:50.739151 | orchestrator | 2026-01-10 14:10:50.739162 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2026-01-10 14:10:50.739172 | orchestrator | Saturday 10 January 2026 14:10:47 +0000 (0:00:02.226) 0:00:09.920 ****** 2026-01-10 14:10:50.739183 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:10:50.739195 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:10:50.739208 | orchestrator | 2026-01-10 14:10:50.739219 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2026-01-10 14:10:50.739229 | orchestrator | Saturday 10 January 2026 14:10:48 +0000 (0:00:00.276) 0:00:10.196 ****** 2026-01-10 14:10:50.739240 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:10:50.739251 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:10:50.739262 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:10:50.739279 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:10:50.739290 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:10:50.739300 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:10:50.739311 | orchestrator | 2026-01-10 14:10:50.739322 | orchestrator | TASK [osism.commons.proxy : 
Set system wide settings in environment file] ****** 2026-01-10 14:10:50.739332 | orchestrator | Saturday 10 January 2026 14:10:49 +0000 (0:00:01.173) 0:00:11.370 ****** 2026-01-10 14:10:50.739343 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:10:50.739354 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:10:50.739364 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:10:50.739375 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:10:50.739385 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:10:50.739396 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:10:50.739407 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:10:50.739417 | orchestrator | 2026-01-10 14:10:50.739433 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2026-01-10 14:10:50.739444 | orchestrator | Saturday 10 January 2026 14:10:50 +0000 (0:00:00.730) 0:00:12.100 ****** 2026-01-10 14:10:50.739455 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:10:50.739475 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:10:50.739487 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:10:50.739498 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:10:50.739508 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:10:50.739519 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:10:50.739529 | orchestrator | ok: [testbed-manager] 2026-01-10 14:10:50.739562 | orchestrator | 2026-01-10 14:10:50.739573 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2026-01-10 14:10:50.739585 | orchestrator | Saturday 10 January 2026 14:10:50 +0000 (0:00:00.446) 0:00:12.547 ****** 2026-01-10 14:10:50.739596 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:10:50.739606 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:10:50.739624 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:11:03.705012 | orchestrator | skipping: 
[testbed-node-5] 2026-01-10 14:11:03.705170 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:11:03.705187 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:11:03.705199 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:11:03.705210 | orchestrator | 2026-01-10 14:11:03.705224 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-01-10 14:11:03.705238 | orchestrator | Saturday 10 January 2026 14:10:50 +0000 (0:00:00.218) 0:00:12.765 ****** 2026-01-10 14:11:03.705252 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:11:03.705283 | orchestrator | 2026-01-10 14:11:03.705295 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-01-10 14:11:03.705307 | orchestrator | Saturday 10 January 2026 14:10:51 +0000 (0:00:00.303) 0:00:13.068 ****** 2026-01-10 14:11:03.705319 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:11:03.705330 | orchestrator | 2026-01-10 14:11:03.705341 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2026-01-10 14:11:03.705352 | orchestrator | Saturday 10 January 2026 14:10:51 +0000 (0:00:00.314) 0:00:13.383 ****** 2026-01-10 14:11:03.705363 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:11:03.705376 | orchestrator | ok: [testbed-manager] 2026-01-10 14:11:03.705387 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:11:03.705398 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:11:03.705409 | orchestrator | ok: [testbed-node-3] 2026-01-10 
14:11:03.705420 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:11:03.705430 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:11:03.705468 | orchestrator | 2026-01-10 14:11:03.705483 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-01-10 14:11:03.705496 | orchestrator | Saturday 10 January 2026 14:10:53 +0000 (0:00:01.606) 0:00:14.990 ****** 2026-01-10 14:11:03.705509 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:11:03.705521 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:11:03.705586 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:11:03.705599 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:11:03.705611 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:11:03.705624 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:11:03.705636 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:11:03.705648 | orchestrator | 2026-01-10 14:11:03.705661 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-01-10 14:11:03.705673 | orchestrator | Saturday 10 January 2026 14:10:53 +0000 (0:00:00.235) 0:00:15.226 ****** 2026-01-10 14:11:03.705686 | orchestrator | ok: [testbed-manager] 2026-01-10 14:11:03.705699 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:11:03.705711 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:11:03.705723 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:11:03.705736 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:11:03.705749 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:11:03.705761 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:11:03.705773 | orchestrator | 2026-01-10 14:11:03.705786 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-01-10 14:11:03.705799 | orchestrator | Saturday 10 January 2026 14:10:53 +0000 (0:00:00.624) 0:00:15.850 ****** 2026-01-10 14:11:03.705812 | orchestrator | skipping: 
[testbed-manager] 2026-01-10 14:11:03.705824 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:11:03.705835 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:11:03.705847 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:11:03.705857 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:11:03.705868 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:11:03.705879 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:11:03.705890 | orchestrator | 2026-01-10 14:11:03.705901 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-01-10 14:11:03.705913 | orchestrator | Saturday 10 January 2026 14:10:54 +0000 (0:00:00.368) 0:00:16.219 ****** 2026-01-10 14:11:03.705924 | orchestrator | ok: [testbed-manager] 2026-01-10 14:11:03.705935 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:11:03.705945 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:11:03.705956 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:11:03.705967 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:11:03.705977 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:11:03.705988 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:11:03.705999 | orchestrator | 2026-01-10 14:11:03.706009 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-01-10 14:11:03.706076 | orchestrator | Saturday 10 January 2026 14:10:54 +0000 (0:00:00.606) 0:00:16.826 ****** 2026-01-10 14:11:03.706089 | orchestrator | ok: [testbed-manager] 2026-01-10 14:11:03.706099 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:11:03.706110 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:11:03.706121 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:11:03.706132 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:11:03.706143 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:11:03.706205 | orchestrator | changed: 
[testbed-node-2] 2026-01-10 14:11:03.706217 | orchestrator | 2026-01-10 14:11:03.706228 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-01-10 14:11:03.706239 | orchestrator | Saturday 10 January 2026 14:10:56 +0000 (0:00:01.236) 0:00:18.062 ****** 2026-01-10 14:11:03.706250 | orchestrator | ok: [testbed-manager] 2026-01-10 14:11:03.706260 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:11:03.706271 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:11:03.706282 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:11:03.706292 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:11:03.706313 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:11:03.706324 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:11:03.706335 | orchestrator | 2026-01-10 14:11:03.706346 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-01-10 14:11:03.706357 | orchestrator | Saturday 10 January 2026 14:10:57 +0000 (0:00:01.119) 0:00:19.182 ****** 2026-01-10 14:11:03.706392 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:11:03.706404 | orchestrator | 2026-01-10 14:11:03.706415 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-01-10 14:11:03.706426 | orchestrator | Saturday 10 January 2026 14:10:57 +0000 (0:00:00.318) 0:00:19.500 ****** 2026-01-10 14:11:03.706437 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:11:03.706447 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:11:03.706458 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:11:03.706468 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:11:03.706479 | orchestrator | changed: [testbed-node-3] 2026-01-10 
14:11:03.706490 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:11:03.706500 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:11:03.706511 | orchestrator | 2026-01-10 14:11:03.706521 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-01-10 14:11:03.706559 | orchestrator | Saturday 10 January 2026 14:10:58 +0000 (0:00:01.407) 0:00:20.908 ****** 2026-01-10 14:11:03.706579 | orchestrator | ok: [testbed-manager] 2026-01-10 14:11:03.706597 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:11:03.706615 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:11:03.706632 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:11:03.706650 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:11:03.706668 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:11:03.706686 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:11:03.706703 | orchestrator | 2026-01-10 14:11:03.706721 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-01-10 14:11:03.706738 | orchestrator | Saturday 10 January 2026 14:10:59 +0000 (0:00:00.237) 0:00:21.146 ****** 2026-01-10 14:11:03.706757 | orchestrator | ok: [testbed-manager] 2026-01-10 14:11:03.706774 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:11:03.706792 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:11:03.706811 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:11:03.706850 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:11:03.706863 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:11:03.706873 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:11:03.706884 | orchestrator | 2026-01-10 14:11:03.706895 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-01-10 14:11:03.706907 | orchestrator | Saturday 10 January 2026 14:10:59 +0000 (0:00:00.253) 0:00:21.399 ****** 2026-01-10 14:11:03.706926 | orchestrator | ok: [testbed-manager] 2026-01-10 14:11:03.706944 | 
orchestrator | ok: [testbed-node-3] 2026-01-10 14:11:03.706961 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:11:03.706978 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:11:03.706997 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:11:03.707016 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:11:03.707034 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:11:03.707053 | orchestrator | 2026-01-10 14:11:03.707065 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-01-10 14:11:03.707076 | orchestrator | Saturday 10 January 2026 14:10:59 +0000 (0:00:00.251) 0:00:21.650 ****** 2026-01-10 14:11:03.707088 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:11:03.707101 | orchestrator | 2026-01-10 14:11:03.707112 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-01-10 14:11:03.707134 | orchestrator | Saturday 10 January 2026 14:10:59 +0000 (0:00:00.292) 0:00:21.943 ****** 2026-01-10 14:11:03.707145 | orchestrator | ok: [testbed-manager] 2026-01-10 14:11:03.707156 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:11:03.707167 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:11:03.707178 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:11:03.707188 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:11:03.707199 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:11:03.707209 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:11:03.707220 | orchestrator | 2026-01-10 14:11:03.707231 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-01-10 14:11:03.707242 | orchestrator | Saturday 10 January 2026 14:11:00 +0000 (0:00:00.649) 0:00:22.593 ****** 2026-01-10 14:11:03.707252 | orchestrator | 
skipping: [testbed-manager] 2026-01-10 14:11:03.707263 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:11:03.707274 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:11:03.707285 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:11:03.707295 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:11:03.707306 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:11:03.707317 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:11:03.707327 | orchestrator | 2026-01-10 14:11:03.707338 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-01-10 14:11:03.707349 | orchestrator | Saturday 10 January 2026 14:11:00 +0000 (0:00:00.227) 0:00:22.821 ****** 2026-01-10 14:11:03.707359 | orchestrator | ok: [testbed-manager] 2026-01-10 14:11:03.707370 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:11:03.707381 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:11:03.707391 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:11:03.707402 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:11:03.707413 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:11:03.707423 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:11:03.707434 | orchestrator | 2026-01-10 14:11:03.707445 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-01-10 14:11:03.707456 | orchestrator | Saturday 10 January 2026 14:11:01 +0000 (0:00:01.095) 0:00:23.916 ****** 2026-01-10 14:11:03.707467 | orchestrator | ok: [testbed-manager] 2026-01-10 14:11:03.707477 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:11:03.707488 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:11:03.707499 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:11:03.707510 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:11:03.707520 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:11:03.707553 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:11:03.707573 | orchestrator | 
2026-01-10 14:11:03.707593 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-01-10 14:11:03.707612 | orchestrator | Saturday 10 January 2026 14:11:02 +0000 (0:00:00.598) 0:00:24.514 ******
2026-01-10 14:11:03.707630 | orchestrator | ok: [testbed-manager]
2026-01-10 14:11:03.707643 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:11:03.707654 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:11:03.707665 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:11:03.707687 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:11:48.216755 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:11:48.216867 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:11:48.216878 | orchestrator |
2026-01-10 14:11:48.216886 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-01-10 14:11:48.216893 | orchestrator | Saturday 10 January 2026 14:11:03 +0000 (0:00:01.130) 0:00:25.644 ******
2026-01-10 14:11:48.216899 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:11:48.216907 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:11:48.216913 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:11:48.216919 | orchestrator | changed: [testbed-manager]
2026-01-10 14:11:48.216925 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:11:48.216931 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:11:48.216936 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:11:48.216942 | orchestrator |
2026-01-10 14:11:48.216948 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] *****
2026-01-10 14:11:48.216972 | orchestrator | Saturday 10 January 2026 14:11:21 +0000 (0:00:17.661) 0:00:43.306 ******
2026-01-10 14:11:48.216979 | orchestrator | ok: [testbed-manager]
2026-01-10 14:11:48.216984 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:11:48.216990 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:11:48.216995 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:11:48.217001 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:11:48.217007 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:11:48.217012 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:11:48.217018 | orchestrator |
2026-01-10 14:11:48.217023 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] *****
2026-01-10 14:11:48.217029 | orchestrator | Saturday 10 January 2026 14:11:21 +0000 (0:00:00.230) 0:00:43.536 ******
2026-01-10 14:11:48.217035 | orchestrator | ok: [testbed-manager]
2026-01-10 14:11:48.217040 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:11:48.217046 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:11:48.217052 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:11:48.217057 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:11:48.217063 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:11:48.217068 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:11:48.217074 | orchestrator |
2026-01-10 14:11:48.217080 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] ***
2026-01-10 14:11:48.217085 | orchestrator | Saturday 10 January 2026 14:11:21 +0000 (0:00:00.229) 0:00:43.766 ******
2026-01-10 14:11:48.217091 | orchestrator | ok: [testbed-manager]
2026-01-10 14:11:48.217096 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:11:48.217102 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:11:48.217108 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:11:48.217114 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:11:48.217120 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:11:48.217125 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:11:48.217131 | orchestrator |
2026-01-10 14:11:48.217137 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] ****
2026-01-10 14:11:48.217142 | orchestrator | Saturday 10 January 2026 14:11:22 +0000 (0:00:00.236) 0:00:44.003 ******
2026-01-10 14:11:48.217149 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:11:48.217158 | orchestrator |
2026-01-10 14:11:48.217164 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************
2026-01-10 14:11:48.217170 | orchestrator | Saturday 10 January 2026 14:11:22 +0000 (0:00:00.296) 0:00:44.299 ******
2026-01-10 14:11:48.217175 | orchestrator | ok: [testbed-manager]
2026-01-10 14:11:48.217181 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:11:48.217187 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:11:48.217192 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:11:48.217198 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:11:48.217203 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:11:48.217209 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:11:48.217215 | orchestrator |
2026-01-10 14:11:48.217220 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] ***********
2026-01-10 14:11:48.217226 | orchestrator | Saturday 10 January 2026 14:11:24 +0000 (0:00:01.955) 0:00:46.254 ******
2026-01-10 14:11:48.217232 | orchestrator | changed: [testbed-manager]
2026-01-10 14:11:48.217237 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:11:48.217243 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:11:48.217249 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:11:48.217254 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:11:48.217260 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:11:48.217266 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:11:48.217271 | orchestrator |
2026-01-10 14:11:48.217277 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] *************************
2026-01-10 14:11:48.217283 | orchestrator | Saturday 10 January 2026 14:11:25 +0000 (0:00:01.192) 0:00:47.447 ******
2026-01-10 14:11:48.217293 | orchestrator | ok: [testbed-manager]
2026-01-10 14:11:48.217299 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:11:48.217306 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:11:48.217312 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:11:48.217318 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:11:48.217325 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:11:48.217331 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:11:48.217338 | orchestrator |
2026-01-10 14:11:48.217344 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] **************************
2026-01-10 14:11:48.217354 | orchestrator | Saturday 10 January 2026 14:11:26 +0000 (0:00:00.914) 0:00:48.362 ******
2026-01-10 14:11:48.217362 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:11:48.217371 | orchestrator |
2026-01-10 14:11:48.217377 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] ***
2026-01-10 14:11:48.217385 | orchestrator | Saturday 10 January 2026 14:11:26 +0000 (0:00:00.376) 0:00:48.738 ******
2026-01-10 14:11:48.217391 | orchestrator | changed: [testbed-manager]
2026-01-10 14:11:48.217398 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:11:48.217404 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:11:48.217409 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:11:48.217415 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:11:48.217421 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:11:48.217426 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:11:48.217432 | orchestrator |
2026-01-10 14:11:48.217451 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************
2026-01-10 14:11:48.217457 | orchestrator | Saturday 10 January 2026 14:11:27 +0000 (0:00:01.171) 0:00:49.909 ******
2026-01-10 14:11:48.217463 | orchestrator | skipping: [testbed-manager]
2026-01-10 14:11:48.217468 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:11:48.217474 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:11:48.217480 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:11:48.217485 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:11:48.217491 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:11:48.217496 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:11:48.217502 | orchestrator |
2026-01-10 14:11:48.217507 | orchestrator | TASK [osism.services.rsyslog : Include logrotate tasks] ************************
2026-01-10 14:11:48.217513 | orchestrator | Saturday 10 January 2026 14:11:28 +0000 (0:00:00.271) 0:00:50.181 ******
2026-01-10 14:11:48.217519 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/logrotate.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:11:48.217541 | orchestrator |
2026-01-10 14:11:48.217548 | orchestrator | TASK [osism.services.rsyslog : Ensure logrotate package is installed] **********
2026-01-10 14:11:48.217553 | orchestrator | Saturday 10 January 2026 14:11:28 +0000 (0:00:00.325) 0:00:50.507 ******
2026-01-10 14:11:48.217559 | orchestrator | ok: [testbed-manager]
2026-01-10 14:11:48.217565 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:11:48.217570 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:11:48.217576 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:11:48.217581 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:11:48.217587 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:11:48.217593 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:11:48.217598 | orchestrator |
2026-01-10 14:11:48.217604 | orchestrator | TASK [osism.services.rsyslog : Configure logrotate for rsyslog] ****************
2026-01-10 14:11:48.217610 | orchestrator | Saturday 10 January 2026 14:11:30 +0000 (0:00:02.015) 0:00:52.523 ******
2026-01-10 14:11:48.217615 | orchestrator | changed: [testbed-manager]
2026-01-10 14:11:48.217621 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:11:48.217627 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:11:48.217632 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:11:48.217644 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:11:48.217650 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:11:48.217656 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:11:48.217661 | orchestrator |
2026-01-10 14:11:48.217667 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] ****************
2026-01-10 14:11:48.217673 | orchestrator | Saturday 10 January 2026 14:11:31 +0000 (0:00:01.148) 0:00:53.672 ******
2026-01-10 14:11:48.217679 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:11:48.217684 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:11:48.217690 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:11:48.217695 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:11:48.217701 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:11:48.217707 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:11:48.217712 | orchestrator | changed: [testbed-manager]
2026-01-10 14:11:48.217718 | orchestrator |
2026-01-10 14:11:48.217724 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] *****************************
2026-01-10 14:11:48.217729 | orchestrator | Saturday 10 January 2026 14:11:44 +0000 (0:00:12.734) 0:01:06.406 ******
2026-01-10 14:11:48.217735 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:11:48.217741 | orchestrator | ok: [testbed-manager]
2026-01-10 14:11:48.217746 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:11:48.217752 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:11:48.217757 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:11:48.217763 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:11:48.217769 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:11:48.217774 | orchestrator |
2026-01-10 14:11:48.217780 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ******************
2026-01-10 14:11:48.217786 | orchestrator | Saturday 10 January 2026 14:11:45 +0000 (0:00:01.193) 0:01:07.600 ******
2026-01-10 14:11:48.217791 | orchestrator | ok: [testbed-manager]
2026-01-10 14:11:48.217797 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:11:48.217803 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:11:48.217808 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:11:48.217814 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:11:48.217819 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:11:48.217825 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:11:48.217830 | orchestrator |
2026-01-10 14:11:48.217836 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] *****
2026-01-10 14:11:48.217842 | orchestrator | Saturday 10 January 2026 14:11:47 +0000 (0:00:01.790) 0:01:09.390 ******
2026-01-10 14:11:48.217847 | orchestrator | ok: [testbed-manager]
2026-01-10 14:11:48.217853 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:11:48.217859 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:11:48.217864 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:11:48.217870 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:11:48.217875 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:11:48.217881 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:11:48.217887 | orchestrator |
2026-01-10 14:11:48.217893 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] ***
2026-01-10 14:11:48.217898 | orchestrator | Saturday 10 January 2026 14:11:47 +0000 (0:00:00.247) 0:01:09.638 ******
2026-01-10 14:11:48.217908 | orchestrator | ok: [testbed-manager]
2026-01-10 14:11:48.217914 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:11:48.217919 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:11:48.217925 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:11:48.217930 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:11:48.217936 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:11:48.217942 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:11:48.217947 | orchestrator |
2026-01-10 14:11:48.217953 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] ****
2026-01-10 14:11:48.217959 | orchestrator | Saturday 10 January 2026 14:11:47 +0000 (0:00:00.238) 0:01:09.876 ******
2026-01-10 14:11:48.217965 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:11:48.217975 | orchestrator |
2026-01-10 14:11:48.217985 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ********************
2026-01-10 14:14:12.431398 | orchestrator | Saturday 10 January 2026 14:11:48 +0000 (0:00:00.277) 0:01:10.154 ******
2026-01-10 14:14:12.431628 | orchestrator | ok: [testbed-manager]
2026-01-10 14:14:12.431658 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:14:12.431678 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:14:12.431698 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:14:12.431717 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:14:12.431736 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:14:12.431756 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:14:12.431774 | orchestrator |
2026-01-10 14:14:12.431792 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] ***************************
2026-01-10 14:14:12.431810 | orchestrator | Saturday 10 January 2026 14:11:49 +0000 (0:00:01.787) 0:01:11.941 ******
2026-01-10 14:14:12.431828 | orchestrator | changed: [testbed-manager]
2026-01-10 14:14:12.431847 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:14:12.431865 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:14:12.431883 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:14:12.431901 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:14:12.431920 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:14:12.431937 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:14:12.431957 | orchestrator |
2026-01-10 14:14:12.431977 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] ***
2026-01-10 14:14:12.431999 | orchestrator | Saturday 10 January 2026 14:11:50 +0000 (0:00:00.577) 0:01:12.519 ******
2026-01-10 14:14:12.432018 | orchestrator | ok: [testbed-manager]
2026-01-10 14:14:12.432039 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:14:12.432058 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:14:12.432075 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:14:12.432093 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:14:12.432113 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:14:12.432133 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:14:12.432152 | orchestrator |
2026-01-10 14:14:12.432170 | orchestrator | TASK [osism.commons.packages : Update package cache] ***************************
2026-01-10 14:14:12.432191 | orchestrator | Saturday 10 January 2026 14:11:50 +0000 (0:00:00.242) 0:01:12.761 ******
2026-01-10 14:14:12.432208 | orchestrator | ok: [testbed-manager]
2026-01-10 14:14:12.432224 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:14:12.432240 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:14:12.432258 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:14:12.432276 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:14:12.432295 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:14:12.432314 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:14:12.432333 | orchestrator |
2026-01-10 14:14:12.432352 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] **********************
2026-01-10 14:14:12.432366 | orchestrator | Saturday 10 January 2026 14:11:52 +0000 (0:00:01.342) 0:01:14.103 ******
2026-01-10 14:14:12.432377 | orchestrator | changed: [testbed-manager]
2026-01-10 14:14:12.432388 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:14:12.432399 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:14:12.432410 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:14:12.432421 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:14:12.432473 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:14:12.432486 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:14:12.432497 | orchestrator |
2026-01-10 14:14:12.432508 | orchestrator | TASK [osism.commons.packages : Upgrade packages] *******************************
2026-01-10 14:14:12.432520 | orchestrator | Saturday 10 January 2026 14:11:54 +0000 (0:00:02.140) 0:01:16.244 ******
2026-01-10 14:14:12.432531 | orchestrator | ok: [testbed-manager]
2026-01-10 14:14:12.432542 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:14:12.432553 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:14:12.432564 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:14:12.432576 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:14:12.432624 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:14:12.432637 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:14:12.432647 | orchestrator |
2026-01-10 14:14:12.432658 | orchestrator | TASK [osism.commons.packages : Download required packages] *********************
2026-01-10 14:14:12.432669 | orchestrator | Saturday 10 January 2026 14:11:57 +0000 (0:00:02.861) 0:01:19.105 ******
2026-01-10 14:14:12.432680 | orchestrator | ok: [testbed-manager]
2026-01-10 14:14:12.432691 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:14:12.432702 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:14:12.432713 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:14:12.432724 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:14:12.432734 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:14:12.432745 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:14:12.432755 | orchestrator |
2026-01-10 14:14:12.432766 | orchestrator | TASK [osism.commons.packages : Install required packages] **********************
2026-01-10 14:14:12.432777 | orchestrator | Saturday 10 January 2026 14:12:34 +0000 (0:00:37.631) 0:01:56.737 ******
2026-01-10 14:14:12.432788 | orchestrator | changed: [testbed-manager]
2026-01-10 14:14:12.432799 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:14:12.432810 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:14:12.432821 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:14:12.432832 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:14:12.432842 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:14:12.432853 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:14:12.432864 | orchestrator |
2026-01-10 14:14:12.432875 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] *********
2026-01-10 14:14:12.432885 | orchestrator | Saturday 10 January 2026 14:13:56 +0000 (0:01:21.402) 0:03:18.140 ******
2026-01-10 14:14:12.432896 | orchestrator | ok: [testbed-manager]
2026-01-10 14:14:12.432907 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:14:12.432918 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:14:12.432928 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:14:12.432939 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:14:12.432950 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:14:12.432960 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:14:12.432971 | orchestrator |
2026-01-10 14:14:12.432981 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] ***
2026-01-10 14:14:12.432993 | orchestrator | Saturday 10 January 2026 14:13:58 +0000 (0:00:02.058) 0:03:20.198 ******
2026-01-10 14:14:12.433003 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:14:12.433013 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:14:12.433024 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:14:12.433034 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:14:12.433045 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:14:12.433056 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:14:12.433066 | orchestrator | changed: [testbed-manager]
2026-01-10 14:14:12.433077 | orchestrator |
2026-01-10 14:14:12.433088 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] *****************************
2026-01-10 14:14:12.433100 | orchestrator | Saturday 10 January 2026 14:14:11 +0000 (0:00:12.891) 0:03:33.090 ******
2026-01-10 14:14:12.433158 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]})
2026-01-10 14:14:12.433198 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]})
2026-01-10 14:14:12.433224 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]})
2026-01-10 14:14:12.433238 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2026-01-10 14:14:12.433250 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'network', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2026-01-10 14:14:12.433262 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]})
2026-01-10 14:14:12.433273 | orchestrator |
2026-01-10 14:14:12.433284 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] ***********
2026-01-10 14:14:12.433295 | orchestrator | Saturday 10 January 2026 14:14:11 +0000 (0:00:00.458) 0:03:33.549 ******
2026-01-10 14:14:12.433306 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-01-10 14:14:12.433317 | orchestrator | skipping: [testbed-manager]
2026-01-10 14:14:12.433328 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-01-10 14:14:12.433338 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:14:12.433349 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-01-10 14:14:12.433360 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-01-10 14:14:12.433371 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:14:12.433382 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:14:12.433393 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-01-10 14:14:12.433404 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-01-10 14:14:12.433415 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-01-10 14:14:12.433448 | orchestrator |
2026-01-10 14:14:12.433468 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] ****************
2026-01-10 14:14:12.433480 | orchestrator | Saturday 10 January 2026 14:14:12 +0000 (0:00:00.726) 0:03:34.275 ******
2026-01-10 14:14:12.433490 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-01-10 14:14:12.433502 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-01-10 14:14:12.433514 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-01-10 14:14:12.433525 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-01-10 14:14:12.433535 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-01-10 14:14:12.433555 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-01-10 14:14:20.508117 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-01-10 14:14:20.508219 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-01-10 14:14:20.508227 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-01-10 14:14:20.508233 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-01-10 14:14:20.508238 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-01-10 14:14:20.508243 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-01-10 14:14:20.508247 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-01-10 14:14:20.508252 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-01-10 14:14:20.508257 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-01-10 14:14:20.508261 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-01-10 14:14:20.508266 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-01-10 14:14:20.508271 | orchestrator | skipping: [testbed-manager]
2026-01-10 14:14:20.508276 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-01-10 14:14:20.508281 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-01-10 14:14:20.508285 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-01-10 14:14:20.508290 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-01-10 14:14:20.508294 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-01-10 14:14:20.508299 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-01-10 14:14:20.508303 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-01-10 14:14:20.508308 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-01-10 14:14:20.508312 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-01-10 14:14:20.508317 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-01-10 14:14:20.508321 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:14:20.508326 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-01-10 14:14:20.508330 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-01-10 14:14:20.508335 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-01-10 14:14:20.508339 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-01-10 14:14:20.508344 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-01-10 14:14:20.508348 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-01-10 14:14:20.508353 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-01-10 14:14:20.508357 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-01-10 14:14:20.508362 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-01-10 14:14:20.508366 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-01-10 14:14:20.508371 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-01-10 14:14:20.508380 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-01-10 14:14:20.508395 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-01-10 14:14:20.508400 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:14:20.508404 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:14:20.508409 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-01-10 14:14:20.508413 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-01-10 14:14:20.508439 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-01-10 14:14:20.508446 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-01-10 14:14:20.508450 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-01-10 14:14:20.508465 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-01-10 14:14:20.508470 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-01-10 14:14:20.508474 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-01-10 14:14:20.508479 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-01-10 14:14:20.508483 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-01-10 14:14:20.508488 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-01-10 14:14:20.508493 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-01-10 14:14:20.508497 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-01-10 14:14:20.508501 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-01-10 14:14:20.508506 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-01-10 14:14:20.508510 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-01-10 14:14:20.508515 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-01-10 14:14:20.508519 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-01-10 14:14:20.508524 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-01-10 14:14:20.508528 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-01-10 14:14:20.508532 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-01-10 14:14:20.508537 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-01-10 14:14:20.508542 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-01-10 14:14:20.508546 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-01-10 14:14:20.508551 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-01-10 14:14:20.508555 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-01-10 14:14:20.508560 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-01-10 14:14:20.508565 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-01-10 14:14:20.508569 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-01-10 14:14:20.508577 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-01-10 14:14:20.508582 | orchestrator |
2026-01-10 14:14:20.508588 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] *****************
2026-01-10 14:14:20.508592 | orchestrator | Saturday 10 January 2026 14:14:18 +0000 (0:00:06.007) 0:03:40.282 ******
2026-01-10 14:14:20.508597 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1})
2026-01-10 14:14:20.508601 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1})
2026-01-10 14:14:20.508606 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1})
2026-01-10 14:14:20.508610 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1})
2026-01-10 14:14:20.508614 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1})
2026-01-10 14:14:20.508619 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1})
2026-01-10 14:14:20.508623 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1})
2026-01-10 14:14:20.508628 | orchestrator |
2026-01-10 14:14:20.508632 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] *****************
2026-01-10 14:14:20.508652 | orchestrator | Saturday 10 January 2026 14:14:18 +0000 (0:00:00.608) 0:03:40.890 ******
2026-01-10 14:14:20.508660 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-10 14:14:20.508664 | orchestrator | skipping: [testbed-manager]
2026-01-10 14:14:20.508669 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-10 14:14:20.508674 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:14:20.508679 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-10 14:14:20.508684 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:14:20.508689 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-10 14:14:20.508694 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:14:20.508700 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-10 14:14:20.508705 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-10 14:14:20.508714 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-10 14:14:33.943493 | orchestrator |
2026-01-10 14:14:33.943625 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on network] *****************
2026-01-10 14:14:33.943638 | orchestrator | Saturday 10 January 2026 14:14:20 +0000 (0:00:01.556) 0:03:42.447 ******
2026-01-10 14:14:33.943648 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-10 14:14:33.943659 | orchestrator | skipping: [testbed-manager]
2026-01-10 14:14:33.943669 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-10 14:14:33.943678 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:14:33.943687 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-10 14:14:33.943697 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-10 14:14:33.943705 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:14:33.943714 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:14:33.943722 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-10 14:14:33.943731 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-10 14:14:33.943740 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-10 14:14:33.943779 | orchestrator |
2026-01-10 14:14:33.943789 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] ****************
2026-01-10 14:14:33.943797 | orchestrator | Saturday 10 January 2026 14:14:21 +0000 (0:00:00.621) 0:03:43.069 ******
2026-01-10 14:14:33.943806 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-01-10 14:14:33.943814 | orchestrator | skipping: [testbed-manager]
2026-01-10 14:14:33.943823 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-01-10 14:14:33.943832 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-01-10 14:14:33.943840 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:14:33.943849 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:14:33.943857 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-01-10 14:14:33.943867 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:14:33.943876 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-01-10 14:14:33.943884 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-01-10 14:14:33.943892 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-01-10 14:14:33.943901 | orchestrator |
2026-01-10 14:14:33.943910 | orchestrator | TASK [osism.commons.limits : Include limits tasks] *****************************
2026-01-10 14:14:33.943918 | orchestrator | Saturday 10 January 2026 14:14:21 +0000 (0:00:00.608) 0:03:43.677 ******
2026-01-10 14:14:33.943927 | orchestrator | skipping: [testbed-manager]
2026-01-10 14:14:33.943935 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:14:33.943944 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:14:33.943952 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:14:33.943961 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:14:33.943969 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:14:33.943979 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:14:33.943989 | orchestrator |
2026-01-10 14:14:33.943999 | orchestrator | TASK [osism.commons.services : Populate service facts] *************************
2026-01-10 14:14:33.944008 | orchestrator | Saturday 10 January 2026 14:14:22 +0000 (0:00:00.315) 0:03:43.993 ******
2026-01-10 14:14:33.944018 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:14:33.944029 | orchestrator | ok: [testbed-manager]
2026-01-10 14:14:33.944038 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:14:33.944048 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:14:33.944057 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:14:33.944067 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:14:33.944076 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:14:33.944086 | orchestrator |
2026-01-10 14:14:33.944095 | orchestrator | TASK [osism.commons.services : Check services] *********************************
2026-01-10 14:14:33.944105 | orchestrator | Saturday 10 January 2026 14:14:27 +0000 (0:00:05.663) 0:03:49.656 ******
2026-01-10 14:14:33.944115 | orchestrator | skipping: [testbed-manager] => (item=nscd)
2026-01-10 14:14:33.944125 | orchestrator | skipping: [testbed-manager]
2026-01-10 14:14:33.944135 | orchestrator | skipping: [testbed-node-3] => (item=nscd)
2026-01-10 14:14:33.944145 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:14:33.944155 | orchestrator | skipping: [testbed-node-4] => (item=nscd)
2026-01-10 14:14:33.944165 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:14:33.944175 | orchestrator | skipping: [testbed-node-5] => (item=nscd)
2026-01-10 14:14:33.944185 | orchestrator | skipping: [testbed-node-0] => (item=nscd)
2026-01-10 14:14:33.944195 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:14:33.944205 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:14:33.944234 | orchestrator | skipping: [testbed-node-1] => (item=nscd)
2026-01-10 14:14:33.944244 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:14:33.944261 | orchestrator | skipping: [testbed-node-2] => (item=nscd)
2026-01-10 14:14:33.944270 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:14:33.944280 | orchestrator |
2026-01-10 14:14:33.944290 | orchestrator | TASK [osism.commons.services : Start/enable required services] *****************
2026-01-10 14:14:33.944300 | orchestrator | Saturday 10 January 2026 14:14:27 +0000 (0:00:00.294) 0:03:49.951 ******
2026-01-10 14:14:33.944310 | orchestrator | ok: [testbed-manager] => (item=cron)
2026-01-10 14:14:33.944320 | orchestrator | ok: [testbed-node-4] => (item=cron)
2026-01-10 14:14:33.944331 | orchestrator | ok: [testbed-node-3] => (item=cron)
2026-01-10 14:14:33.944358 | orchestrator | ok: [testbed-node-5] => (item=cron)
2026-01-10 14:14:33.944368 | orchestrator | ok: [testbed-node-0] => (item=cron)
2026-01-10 14:14:33.944377 | orchestrator | ok: [testbed-node-1] => (item=cron)
2026-01-10 14:14:33.944385 | orchestrator | ok: [testbed-node-2] => (item=cron)
2026-01-10 14:14:33.944394 | orchestrator |
2026-01-10 14:14:33.944402 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ******
2026-01-10 14:14:33.944432 | orchestrator | Saturday 10 January 2026 14:14:29 +0000 (0:00:01.049) 0:03:51.000 ******
2026-01-10 14:14:33.944444 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:14:33.944456 | orchestrator |
2026-01-10 14:14:33.944464 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] *************************
2026-01-10 14:14:33.944473 | orchestrator | Saturday 10 January 2026 14:14:29 +0000 (0:00:00.531) 0:03:51.532 ******
2026-01-10 14:14:33.944482 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:14:33.944491 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:14:33.944499 | orchestrator | ok: [testbed-manager]
2026-01-10 14:14:33.944508 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:14:33.944517 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:14:33.944525 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:14:33.944534 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:14:33.944542 | orchestrator |
2026-01-10 14:14:33.944551 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] *************
2026-01-10 14:14:33.944560 | orchestrator | Saturday 10 January 2026 14:14:31 +0000 (0:00:01.460) 0:03:52.992 ******
2026-01-10 14:14:33.944568 | orchestrator | ok: [testbed-manager]
2026-01-10 14:14:33.944577 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:14:33.944585 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:14:33.944594 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:14:33.944602 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:14:33.944611 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:14:33.944619 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:14:33.944627 | orchestrator |
2026-01-10 14:14:33.944636 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] **************
2026-01-10 14:14:33.944645 | orchestrator | Saturday 10 January 2026 14:14:31 +0000 (0:00:00.613) 0:03:53.605 ******
2026-01-10 14:14:33.944653 | orchestrator | changed: [testbed-manager]
2026-01-10 14:14:33.944662 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:14:33.944671 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:14:33.944679 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:14:33.944688 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:14:33.944696 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:14:33.944705 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:14:33.944713 | orchestrator |
2026-01-10 14:14:33.944722 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] **********
2026-01-10 14:14:33.944731 | orchestrator | Saturday 10 January 2026 14:14:32 +0000 (0:00:00.606) 0:03:54.212 ******
2026-01-10 14:14:33.944739 | orchestrator | ok: [testbed-manager]
2026-01-10 14:14:33.944748 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:14:33.944757 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:14:33.944765 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:14:33.944774 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:14:33.944789 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:14:33.944797 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:14:33.944806 | orchestrator |
2026-01-10 14:14:33.944814 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] ****************************
2026-01-10 14:14:33.944823 | orchestrator | Saturday 10 January 2026 14:14:32 +0000 (0:00:00.632) 0:03:54.844 ******
2026-01-10 14:14:33.944835 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1768052871.1494703, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-10 14:14:33.944852 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1768052865.0401337, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-10 14:14:33.944862 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1768052872.5842035, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-10 14:14:33.944891 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1768052892.8065233, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-10 14:14:38.911008 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1768052888.3766847, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-10 14:14:38.911113 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1768052885.2478056, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-10 14:14:38.911128 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1768052866.4316297, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-10 14:14:38.911168 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-10 14:14:38.911196 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-10 14:14:38.911208 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-10 14:14:38.911219 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-10 14:14:38.911249 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-10 14:14:38.911262 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-10 14:14:38.911273 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-10 14:14:38.911292 | orchestrator |
2026-01-10 14:14:38.911305 | orchestrator | TASK [osism.commons.motd : Copy motd file] *************************************
2026-01-10 14:14:38.911317 | orchestrator | Saturday 10 January 2026 14:14:33 +0000 (0:00:01.037) 0:03:55.882 ******
2026-01-10 14:14:38.911328 | orchestrator | changed: [testbed-manager]
2026-01-10 14:14:38.911341 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:14:38.911351 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:14:38.911361 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:14:38.911372 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:14:38.911383 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:14:38.911393 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:14:38.911404 | orchestrator |
2026-01-10 14:14:38.911453 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************
2026-01-10 14:14:38.911464 | orchestrator | Saturday 10 January 2026 14:14:35 +0000 (0:00:01.149) 0:03:57.031 ******
2026-01-10 14:14:38.911475 | orchestrator | changed: [testbed-manager]
2026-01-10 14:14:38.911486 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:14:38.911496 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:14:38.911507 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:14:38.911518 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:14:38.911528 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:14:38.911539 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:14:38.911551 | orchestrator |
2026-01-10 14:14:38.911563 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ********************************
2026-01-10 14:14:38.911576 | orchestrator | Saturday 10 January 2026 14:14:36 +0000 (0:00:01.181) 0:03:58.213 ******
2026-01-10 14:14:38.911589 | orchestrator | changed: [testbed-manager]
2026-01-10 14:14:38.911601 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:14:38.911614 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:14:38.911625 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:14:38.911638 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:14:38.911650 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:14:38.911661 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:14:38.911673 | orchestrator |
2026-01-10 14:14:38.911692 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ********************
2026-01-10 14:14:38.911705 | orchestrator | Saturday 10 January 2026 14:14:37 +0000 (0:00:01.181) 0:03:59.394 ******
2026-01-10 14:14:38.911718 | orchestrator | skipping: [testbed-manager]
2026-01-10 14:14:38.911730 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:14:38.911748 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:14:38.911766 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:14:38.911783 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:14:38.911801 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:14:38.911820 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:14:38.911837 | orchestrator |
2026-01-10 14:14:38.911855 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] ****************
2026-01-10 14:14:38.911872 | orchestrator | Saturday 10 January 2026 14:14:37 +0000 (0:00:00.284) 0:03:59.679 ******
2026-01-10 14:14:38.911890 | orchestrator | ok: [testbed-manager]
2026-01-10 14:14:38.911910 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:14:38.911928 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:14:38.911945 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:14:38.911963 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:14:38.911978 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:14:38.911989 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:14:38.911999 | orchestrator |
2026-01-10 14:14:38.912010 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ********
2026-01-10 14:14:38.912030 | orchestrator | Saturday 10 January 2026 14:14:38 +0000 (0:00:00.789) 0:04:00.468 ******
2026-01-10 14:14:38.912043 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:14:38.912057 | orchestrator |
2026-01-10 14:14:38.912067 | orchestrator | TASK [osism.services.rng : Install rng package] ********************************
2026-01-10 14:14:38.912088 | orchestrator | Saturday 10 January 2026 14:14:38 +0000 (0:00:00.384) 0:04:00.852 ******
2026-01-10 14:16:01.072598 | orchestrator | ok: [testbed-manager]
2026-01-10 14:16:01.072756 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:16:01.072771 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:16:01.072779 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:16:01.072788 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:16:01.072796 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:16:01.072804 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:16:01.072813 | orchestrator |
2026-01-10 14:16:01.072822 | orchestrator | TASK [osism.services.rng : Remove haveged package] *****************************
2026-01-10 14:16:01.072832 | orchestrator | Saturday 10 January 2026 14:14:47 +0000 (0:00:08.923) 0:04:09.776 ******
2026-01-10 14:16:01.072840 | orchestrator | ok: [testbed-manager]
2026-01-10 14:16:01.072848 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:16:01.072856 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:16:01.072864 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:16:01.072872 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:16:01.072880 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:16:01.072887 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:16:01.072895 | orchestrator |
2026-01-10 14:16:01.072903 | orchestrator | TASK [osism.services.rng : Manage rng service] *********************************
2026-01-10 14:16:01.072911 | orchestrator | Saturday 10 January 2026 14:14:49 +0000 (0:00:01.324) 0:04:11.101 ******
2026-01-10 14:16:01.072919 | orchestrator | ok: [testbed-manager]
2026-01-10 14:16:01.072927 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:16:01.072935 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:16:01.072943 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:16:01.072950 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:16:01.072958 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:16:01.072966 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:16:01.072974 | orchestrator |
2026-01-10 14:16:01.072982 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ******
2026-01-10 14:16:01.072990 | orchestrator | Saturday 10 January 2026 14:14:50 +0000 (0:00:01.222) 0:04:12.324 ******
2026-01-10 14:16:01.072998 | orchestrator | ok: [testbed-manager]
2026-01-10 14:16:01.073006 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:16:01.073014 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:16:01.073022 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:16:01.073030 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:16:01.073038 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:16:01.073046 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:16:01.073053 | orchestrator |
2026-01-10 14:16:01.073062 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] ***
2026-01-10 14:16:01.073071 | orchestrator | Saturday 10 January 2026 14:14:50 +0000 (0:00:00.301) 0:04:12.626 ******
2026-01-10 14:16:01.073085 | orchestrator | ok: [testbed-manager]
2026-01-10 14:16:01.073097 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:16:01.073110 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:16:01.073123 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:16:01.073135 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:16:01.073148 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:16:01.073161 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:16:01.073176 | orchestrator |
2026-01-10 14:16:01.073190 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] ***
2026-01-10 14:16:01.073206 | orchestrator | Saturday 10 January 2026 14:14:51 +0000 (0:00:00.329) 0:04:12.955 ******
2026-01-10 14:16:01.073242 | orchestrator | ok: [testbed-manager]
2026-01-10 14:16:01.073252 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:16:01.073261 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:16:01.073271 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:16:01.073280 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:16:01.073290 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:16:01.073298 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:16:01.073308 | orchestrator |
2026-01-10 14:16:01.073317 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] **************************
2026-01-10 14:16:01.073327 | orchestrator | Saturday 10 January 2026 14:14:51 +0000 (0:00:00.309) 0:04:13.265 ******
2026-01-10 14:16:01.073336 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:16:01.073364 | orchestrator | ok: [testbed-manager]
2026-01-10 14:16:01.073373 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:16:01.073383 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:16:01.073393 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:16:01.073402 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:16:01.073411 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:16:01.073421 | orchestrator |
2026-01-10 14:16:01.073429 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] *******
2026-01-10 14:16:01.073437 | orchestrator | Saturday 10 January 2026 14:14:56 +0000 (0:00:05.566) 0:04:18.832 ******
2026-01-10 14:16:01.073447 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:16:01.073456 | orchestrator |
2026-01-10 14:16:01.073465 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************
2026-01-10 14:16:01.073473 | orchestrator | Saturday 10 January 2026 14:14:57 +0000 (0:00:00.496) 0:04:19.328 ******
2026-01-10 14:16:01.073481 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)
2026-01-10 14:16:01.073488 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)
2026-01-10 14:16:01.073496 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)
2026-01-10 14:16:01.073504 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)
2026-01-10 14:16:01.073512 | orchestrator | skipping: [testbed-manager]
2026-01-10 14:16:01.073520 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)
2026-01-10 14:16:01.073538 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)
2026-01-10 14:16:01.073546 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:16:01.073554 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)
2026-01-10 14:16:01.073562 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)
2026-01-10 14:16:01.073570 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:16:01.073578 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)
2026-01-10 14:16:01.073585 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)
2026-01-10 14:16:01.073593 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:16:01.073601 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)
2026-01-10 14:16:01.073609 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)
2026-01-10 14:16:01.073670 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:16:01.073681 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:16:01.073689 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)
2026-01-10 14:16:01.073697 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)
2026-01-10 14:16:01.073704 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:16:01.073712 | orchestrator |
2026-01-10 14:16:01.073720 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] ***************************
2026-01-10 14:16:01.073728 | orchestrator | Saturday 10 January 2026 14:14:57 +0000 (0:00:00.395) 0:04:19.723 ******
2026-01-10 14:16:01.073736 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:16:01.073752 | orchestrator |
2026-01-10 14:16:01.073760 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ********************************
2026-01-10 14:16:01.073768 | orchestrator | Saturday 10 January 2026 14:14:58 +0000 (0:00:00.492) 0:04:20.216 ******
2026-01-10 14:16:01.073776 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)
2026-01-10 14:16:01.073784 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)
2026-01-10 14:16:01.073792 | orchestrator | skipping: [testbed-manager]
2026-01-10 14:16:01.073800 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)
2026-01-10 14:16:01.073808 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:16:01.073815 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:16:01.073823 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)
2026-01-10 14:16:01.073831 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)
2026-01-10 14:16:01.073839 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:16:01.073847 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:16:01.073854 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)
2026-01-10 14:16:01.073862 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:16:01.073870 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)
2026-01-10 14:16:01.073878 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:16:01.073886 | orchestrator |
2026-01-10 14:16:01.073893 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] **************************
2026-01-10 14:16:01.073901 | orchestrator | Saturday 10 January 2026 14:14:58 +0000 (0:00:00.294) 0:04:20.510 ******
2026-01-10 14:16:01.073909 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:16:01.073917 | orchestrator |
2026-01-10 14:16:01.073925 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2026-01-10 14:16:01.073933 | orchestrator | Saturday 10 January 2026 14:14:59 +0000 (0:00:00.477) 0:04:20.988 ******
2026-01-10 14:16:01.073940 | orchestrator | changed: [testbed-manager]
2026-01-10 14:16:01.073948 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:16:01.073956 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:16:01.073964 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:16:01.073972 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:16:01.073979 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:16:01.073987 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:16:01.073995 | orchestrator |
2026-01-10 14:16:01.074003 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************
2026-01-10 14:16:01.074011 | orchestrator | Saturday 10 January 2026 14:15:35 +0000 (0:00:36.652) 0:04:57.640 ******
2026-01-10 14:16:01.074084 | orchestrator | changed: [testbed-manager]
2026-01-10 14:16:01.074092 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:16:01.074100 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:16:01.074108 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:16:01.074120 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:16:01.074128 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:16:01.074136 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:16:01.074144 | orchestrator |
2026-01-10 14:16:01.074152 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] ***********
2026-01-10 14:16:01.074160 | orchestrator | Saturday 10 January 2026 14:15:44 +0000 (0:00:09.184) 0:05:06.825 ******
2026-01-10 14:16:01.074168 | orchestrator | changed: [testbed-manager]
2026-01-10 14:16:01.074176 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:16:01.074183 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:16:01.074191 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:16:01.074199 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:16:01.074207 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:16:01.074221 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:16:01.074229 | orchestrator |
2026-01-10 14:16:01.074237 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] **********
2026-01-10 14:16:01.074245 | orchestrator | Saturday 10 January 2026 14:15:52 +0000 (0:00:08.029) 0:05:14.855 ******
2026-01-10 14:16:01.074253 | orchestrator | ok: [testbed-manager]
2026-01-10 14:16:01.074261 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:16:01.074268 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:16:01.074276 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:16:01.074284 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:16:01.074292 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:16:01.074300 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:16:01.074308 | orchestrator |
2026-01-10 14:16:01.074315 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] ***
2026-01-10 14:16:01.074324 | orchestrator | Saturday 10 January 2026 14:15:54 +0000 (0:00:01.859) 0:05:16.715 ******
2026-01-10 14:16:01.074331 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:16:01.074339 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:16:01.074360 | orchestrator | changed: [testbed-manager]
2026-01-10 14:16:01.074369 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:16:01.074377 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:16:01.074385 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:16:01.074393 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:16:01.074401 | orchestrator |
2026-01-10 14:16:01.074415 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] *************************
2026-01-10 14:16:13.334912 | orchestrator | Saturday 10 January 2026 14:16:01 +0000 (0:00:06.291) 0:05:23.007 ******
2026-01-10 14:16:13.335087 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:16:13.335117 | orchestrator |
2026-01-10 14:16:13.335131 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] *******
2026-01-10 14:16:13.335186 | orchestrator | Saturday 10 January 2026 14:16:01 +0000 (0:00:00.595) 0:05:23.602 ******
2026-01-10 14:16:13.335200 | orchestrator | changed: [testbed-manager]
2026-01-10 14:16:13.335212 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:16:13.335223 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:16:13.335234 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:16:13.335245 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:16:13.335256 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:16:13.335267 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:16:13.335278 | orchestrator |
2026-01-10 14:16:13.335288 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] *************************
2026-01-10 14:16:13.335300 | orchestrator | Saturday 10 January 2026 14:16:02 +0000 (0:00:00.719) 0:05:24.321 ******
2026-01-10 14:16:13.335310 | orchestrator | ok: [testbed-manager]
2026-01-10 14:16:13.335322 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:16:13.335332 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:16:13.335400 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:16:13.335411 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:16:13.335422 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:16:13.335435 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:16:13.335446 | orchestrator |
2026-01-10 14:16:13.335458 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] ****************************
2026-01-10 14:16:13.335471 | orchestrator | Saturday 10 January 2026 14:16:04 +0000 (0:00:01.766) 0:05:26.088 ******
2026-01-10 14:16:13.335483 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:16:13.335495 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:16:13.335508 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:16:13.335520 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:16:13.335532 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:16:13.335544 | orchestrator | changed: [testbed-manager]
2026-01-10 14:16:13.335556 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:16:13.335568 | orchestrator |
2026-01-10 14:16:13.335606 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] ***********************
2026-01-10 14:16:13.335620 | orchestrator | Saturday 10 January 2026 14:16:05 +0000 (0:00:01.726) 0:05:27.814 ******
2026-01-10 14:16:13.335632 | orchestrator | skipping: [testbed-manager]
2026-01-10 14:16:13.335644 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:16:13.335657 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:16:13.335669 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:16:13.335681 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:16:13.335693 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:16:13.335705 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:16:13.335717 | orchestrator |
2026-01-10 14:16:13.335729 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] *********************
2026-01-10 14:16:13.335741 | orchestrator | Saturday 10 January 2026 14:16:06 +0000 (0:00:00.291) 0:05:28.106 ******
2026-01-10 14:16:13.335753 | orchestrator | skipping: [testbed-manager]
2026-01-10 14:16:13.335765 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:16:13.335777 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:16:13.335790 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:16:13.335801 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:16:13.335813 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:16:13.335825 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:16:13.335837 | orchestrator |
2026-01-10 14:16:13.335848 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ******
2026-01-10 14:16:13.335859 | orchestrator | Saturday 10 January 2026 14:16:06 +0000 (0:00:00.292) 0:05:28.492 ******
2026-01-10 14:16:13.335869 | orchestrator | ok: [testbed-manager]
2026-01-10 14:16:13.335880 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:16:13.335890 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:16:13.335917 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:16:13.335929 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:16:13.335939 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:16:13.335950 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:16:13.335960 | orchestrator |
2026-01-10 14:16:13.335971 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] ****
2026-01-10 14:16:13.335982 | orchestrator | Saturday 10 January 2026 14:16:06 +0000 (0:00:00.290) 0:05:28.785 ******
2026-01-10 14:16:13.335993 | orchestrator | skipping: [testbed-manager]
2026-01-10 14:16:13.336004 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:16:13.336014 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:16:13.336025 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:16:13.336036 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:16:13.336046 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:16:13.336057 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:16:13.336068 | orchestrator |
2026-01-10 14:16:13.336079 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] ***
2026-01-10 14:16:13.336091 | orchestrator | Saturday 10 January 2026 14:16:07 +0000 (0:00:00.290) 0:05:29.076 ******
2026-01-10 14:16:13.336102 | orchestrator | ok: [testbed-manager]
2026-01-10 14:16:13.336113 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:16:13.336124 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:16:13.336135 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:16:13.336146 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:16:13.336157 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:16:13.336168 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:16:13.336178 | orchestrator |
2026-01-10 14:16:13.336189 | orchestrator | TASK [osism.services.docker : Print used docker version] ***********************
2026-01-10 14:16:13.336200 | orchestrator | Saturday 10 January 2026 14:16:07 +0000 (0:00:00.323) 0:05:29.399 ******
2026-01-10 14:16:13.336211 | orchestrator | ok: [testbed-manager] =>
2026-01-10 14:16:13.336222 | orchestrator |   docker_version: 5:27.5.1
2026-01-10 14:16:13.336233 | orchestrator | ok: [testbed-node-3] =>
2026-01-10 14:16:13.336248 | orchestrator |   docker_version: 5:27.5.1
2026-01-10 14:16:13.336271 | orchestrator | ok: [testbed-node-4] =>
2026-01-10 14:16:13.336314 | orchestrator |   docker_version: 5:27.5.1
2026-01-10 14:16:13.336332 | orchestrator | ok: [testbed-node-5] =>
2026-01-10 14:16:13.336378 | orchestrator |   docker_version: 5:27.5.1
2026-01-10 14:16:13.336423 | orchestrator | ok: [testbed-node-0] =>
2026-01-10 14:16:13.336441 | orchestrator |   docker_version: 5:27.5.1
2026-01-10 14:16:13.336457 | orchestrator | ok: [testbed-node-1] =>
2026-01-10 14:16:13.336474 | orchestrator |   docker_version: 5:27.5.1
2026-01-10 14:16:13.336490 | orchestrator | ok: [testbed-node-2] =>
2026-01-10 14:16:13.336507 | orchestrator |   docker_version: 5:27.5.1
2026-01-10 14:16:13.336522 | orchestrator |
2026-01-10 14:16:13.336538 | orchestrator | TASK [osism.services.docker : Print used docker cli version] *******************
2026-01-10 14:16:13.336556 | orchestrator | Saturday 10 January 2026 14:16:07 +0000 (0:00:00.317) 0:05:29.717 ******
2026-01-10 14:16:13.336574 | orchestrator | ok: [testbed-manager] =>
2026-01-10 14:16:13.336592 | orchestrator |   docker_cli_version: 5:27.5.1
2026-01-10 14:16:13.336610 | orchestrator | ok: [testbed-node-3] =>
2026-01-10 14:16:13.336628 | orchestrator |   docker_cli_version: 5:27.5.1
2026-01-10 14:16:13.336647 | orchestrator | ok: [testbed-node-4] =>
2026-01-10 14:16:13.336666 | orchestrator |   docker_cli_version: 5:27.5.1
2026-01-10 14:16:13.336679 | orchestrator | ok: [testbed-node-5] =>
2026-01-10 14:16:13.336689 | orchestrator |   docker_cli_version: 5:27.5.1
2026-01-10 14:16:13.336700 | orchestrator | ok: [testbed-node-0] =>
2026-01-10 14:16:13.336710 | orchestrator |   docker_cli_version: 5:27.5.1
2026-01-10 14:16:13.336721 | orchestrator | ok: [testbed-node-1] =>
2026-01-10 14:16:13.336732 | orchestrator |   docker_cli_version: 5:27.5.1
2026-01-10 14:16:13.336742 | orchestrator | ok: [testbed-node-2] =>
2026-01-10 14:16:13.336753 | orchestrator |   docker_cli_version: 5:27.5.1
2026-01-10 14:16:13.336771 | orchestrator |
2026-01-10 14:16:13.336789 | orchestrator | TASK [osism.services.docker : Include block storage tasks] *********************
2026-01-10 14:16:13.336805 | orchestrator | Saturday 10 January 2026 14:16:08 +0000 (0:00:00.298) 0:05:30.015 ******
2026-01-10 14:16:13.336822 | orchestrator | skipping: [testbed-manager]
2026-01-10 14:16:13.336840 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:16:13.336857 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:16:13.336876 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:16:13.336893 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:16:13.336910 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:16:13.336927 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:16:13.336944 | orchestrator |
2026-01-10 14:16:13.336962 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] **********************
2026-01-10 14:16:13.336980 | orchestrator | Saturday 10 January 2026 14:16:08 +0000 (0:00:00.263) 0:05:30.279 ******
2026-01-10 14:16:13.336999 | orchestrator | skipping: [testbed-manager]
2026-01-10 14:16:13.337016 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:16:13.337034 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:16:13.337050 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:16:13.337066 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:16:13.337083 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:16:13.337101 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:16:13.337118 | orchestrator |
2026-01-10 14:16:13.337137 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ********************
2026-01-10 14:16:13.337154 | orchestrator | Saturday 10 January 2026 14:16:08 +0000 (0:00:00.303) 0:05:30.582 ******
2026-01-10 14:16:13.337175 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:16:13.337196 | orchestrator |
2026-01-10 14:16:13.337215 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] ****
2026-01-10 14:16:13.337232 | orchestrator | Saturday 10 January 2026 14:16:09 +0000 (0:00:00.990) 0:05:30.999 ******
2026-01-10 14:16:13.337250 | orchestrator | ok: [testbed-manager]
2026-01-10 14:16:13.337268 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:16:13.337304 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:16:13.337323 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:16:13.337371 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:16:13.337390 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:16:13.337409 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:16:13.337426 | orchestrator |
2026-01-10 14:16:13.337443 | orchestrator | TASK [osism.services.docker : Gather package facts] ****************************
2026-01-10 14:16:13.337477 | orchestrator | Saturday 10 January 2026 14:16:10 +0000 (0:00:00.990) 0:05:31.990 ******
2026-01-10 14:16:13.337496 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:16:13.337513 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:16:13.337530 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:16:13.337549 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:16:13.337566 | orchestrator | ok: [testbed-manager]
2026-01-10 14:16:13.337584 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:16:13.337602 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:16:13.337621 | orchestrator |
2026-01-10 14:16:13.337640 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] ***
2026-01-10 14:16:13.337660 | orchestrator | Saturday 10 January 2026 14:16:12 +0000 (0:00:02.902) 0:05:34.893 ******
2026-01-10 14:16:13.337679 | orchestrator | skipping: [testbed-manager] => (item=containerd)
2026-01-10 14:16:13.337699 | orchestrator | skipping: [testbed-manager] => (item=docker.io)
2026-01-10 14:16:13.337719 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)
2026-01-10 14:16:13.337736 | orchestrator | skipping: [testbed-node-3] => (item=containerd)
2026-01-10 14:16:13.337753 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)
2026-01-10 14:16:13.337772 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)
2026-01-10 14:16:13.337790 | orchestrator | skipping: [testbed-manager]
2026-01-10 14:16:13.337808 | orchestrator | skipping: [testbed-node-4] => (item=containerd)
2026-01-10 14:16:13.337826 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)
2026-01-10 14:16:13.337844 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)
2026-01-10 14:16:13.337862 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:16:13.337880 | orchestrator | skipping: [testbed-node-5] => (item=containerd)
2026-01-10 14:16:13.337898 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)
2026-01-10 14:16:13.337916 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:16:13.337934 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)
2026-01-10 14:16:13.337952 | orchestrator | skipping: [testbed-node-0] => (item=containerd)
2026-01-10 14:16:13.337992 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)
2026-01-10 14:17:17.854563 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)
2026-01-10 14:17:17.854683 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:17:17.854701 | orchestrator | skipping: [testbed-node-1] => (item=containerd)
2026-01-10 14:17:17.854713 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:17:17.854724 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)
2026-01-10 14:17:17.854735 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)
2026-01-10 14:17:17.854746 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:17:17.854757 | orchestrator | skipping: [testbed-node-2] => (item=containerd)
2026-01-10 14:17:17.854768 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)
2026-01-10 14:17:17.854779 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)
2026-01-10 14:17:17.854790 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:17:17.854801 | orchestrator |
2026-01-10 14:17:17.854813 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] *************
2026-01-10 14:17:17.854826 | orchestrator | Saturday 10 January 2026 14:16:13 +0000 (0:00:00.611) 0:05:35.504 ******
2026-01-10 14:17:17.854837 | orchestrator | ok: [testbed-manager]
2026-01-10 14:17:17.854849 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:17:17.854860 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:17:17.854895 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:17:17.854907 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:17:17.854917 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:17:17.854928 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:17:17.854938 | orchestrator |
2026-01-10 14:17:17.854949 | orchestrator | TASK [osism.services.docker : Add repository gpg key] **************************
2026-01-10 14:17:17.854960 | orchestrator | Saturday 10 January 2026 14:16:20 +0000 (0:00:07.380) 0:05:42.885 ******
2026-01-10 14:17:17.854971 | orchestrator | ok: [testbed-manager]
2026-01-10 14:17:17.854982 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:17:17.854992 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:17:17.855003 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:17:17.855013 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:17:17.855024 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:17:17.855035 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:17:17.855047 | orchestrator |
2026-01-10 14:17:17.855060 | orchestrator | TASK [osism.services.docker : Add repository] **********************************
2026-01-10 14:17:17.855071 | orchestrator | Saturday 10 January 2026 14:16:21 +0000 (0:00:01.031) 0:05:43.916 ******
2026-01-10 14:17:17.855084 | orchestrator | ok: [testbed-manager]
2026-01-10 14:17:17.855096 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:17:17.855108 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:17:17.855119 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:17:17.855131 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:17:17.855143 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:17:17.855155 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:17:17.855167 | orchestrator |
2026-01-10 14:17:17.855179 | orchestrator | TASK [osism.services.docker : Update package cache] ****************************
2026-01-10 14:17:17.855192 | orchestrator | Saturday 10 January 2026 14:16:30 +0000 (0:00:08.959) 0:05:52.876 ******
2026-01-10 14:17:17.855204 | orchestrator | changed: [testbed-manager]
2026-01-10 14:17:17.855216 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:17:17.855228 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:17:17.855240 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:17:17.855252 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:17:17.855265 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:17:17.855305 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:17:17.855317 | orchestrator |
2026-01-10 14:17:17.855329 | orchestrator | TASK [osism.services.docker : Pin docker package version] **********************
2026-01-10 14:17:17.855342 | orchestrator | Saturday 10 January 2026 14:16:34 +0000 (0:00:03.526) 0:05:56.402 ******
2026-01-10 14:17:17.855354 | orchestrator | ok: [testbed-manager]
2026-01-10 14:17:17.855366 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:17:17.855378 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:17:17.855390 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:17:17.855402 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:17:17.855413 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:17:17.855424 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:17:17.855435 | orchestrator |
2026-01-10 14:17:17.855446 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ******************
2026-01-10 14:17:17.855457 | orchestrator | Saturday 10 January 2026 14:16:35 +0000 (0:00:01.414) 0:05:57.817 ******
2026-01-10 14:17:17.855468 | orchestrator | ok: [testbed-manager]
2026-01-10 14:17:17.855478 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:17:17.855489 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:17:17.855500 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:17:17.855511 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:17:17.855522 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:17:17.855532 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:17:17.855543 | orchestrator |
2026-01-10 14:17:17.855554 | orchestrator | TASK [osism.services.docker : Unlock containerd package] ***********************
2026-01-10 14:17:17.855565 | orchestrator | Saturday 10 January 2026 14:16:37 +0000 (0:00:01.520) 0:05:59.337 ******
2026-01-10 14:17:17.855576 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:17:17.855594 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:17:17.855605 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:17:17.855616 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:17:17.855627 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:17:17.855637 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:17:17.855648 | orchestrator | changed: [testbed-manager]
2026-01-10 14:17:17.855658 | orchestrator |
2026-01-10 14:17:17.855669 | orchestrator | TASK [osism.services.docker : Install containerd package] **********************
2026-01-10 14:17:17.855680 | orchestrator | Saturday 10 January 2026 14:16:38 +0000 (0:00:00.630) 0:05:59.968 ******
2026-01-10 14:17:17.855690 | orchestrator | ok: [testbed-manager]
2026-01-10 14:17:17.855701 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:17:17.855712 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:17:17.855722 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:17:17.855733 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:17:17.855744 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:17:17.855754 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:17:17.855765 | orchestrator |
2026-01-10 14:17:17.855776 | orchestrator | TASK [osism.services.docker : Lock containerd package] *************************
2026-01-10 14:17:17.855803 | orchestrator | Saturday 10 January 2026 14:16:48 +0000 (0:00:10.777) 0:06:10.745 ******
2026-01-10 14:17:17.855815 | orchestrator | changed: [testbed-manager]
2026-01-10 14:17:17.855826 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:17:17.855836 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:17:17.855847 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:17:17.855858 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:17:17.855868 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:17:17.855879 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:17:17.855890 | orchestrator |
2026-01-10 14:17:17.855901 | orchestrator | TASK [osism.services.docker : Install docker-cli package] **********************
2026-01-10 14:17:17.855912 | orchestrator | Saturday 10 January 2026 14:16:49 +0000 (0:00:00.930) 0:06:11.675 ******
2026-01-10 14:17:17.855922 | orchestrator | ok: [testbed-manager]
2026-01-10 14:17:17.855933 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:17:17.855943 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:17:17.855954 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:17:17.855965 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:17:17.855976 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:17:17.855986 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:17:17.855997 | orchestrator |
2026-01-10 14:17:17.856008 | orchestrator | TASK [osism.services.docker : Install docker package] **************************
2026-01-10 14:17:17.856018 | orchestrator | Saturday 10 January 2026 14:16:59 +0000 (0:00:09.700) 0:06:21.376 ******
2026-01-10 14:17:17.856029 | orchestrator | ok: [testbed-manager]
2026-01-10 14:17:17.856040 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:17:17.856050 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:17:17.856061 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:17:17.856071 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:17:17.856082 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:17:17.856093 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:17:17.856103 | orchestrator |
2026-01-10 14:17:17.856114 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] ***
2026-01-10 14:17:17.856125 | orchestrator | Saturday 10 January 2026 14:17:11 +0000 (0:00:11.784) 0:06:33.160 ******
2026-01-10 14:17:17.856135 | orchestrator | ok: [testbed-manager] => (item=python3-docker)
2026-01-10 14:17:17.856146 | orchestrator | ok: [testbed-node-3] => (item=python3-docker)
2026-01-10 14:17:17.856157 | orchestrator | ok: [testbed-node-4] => (item=python3-docker)
2026-01-10 14:17:17.856168 | orchestrator | ok: [testbed-node-5] => (item=python3-docker)
2026-01-10 14:17:17.856178 | orchestrator | ok: [testbed-node-0] => (item=python3-docker)
2026-01-10 14:17:17.856189 | orchestrator | ok: [testbed-manager] => (item=python-docker)
2026-01-10 14:17:17.856199 | orchestrator | ok: [testbed-node-1] => (item=python3-docker)
2026-01-10 14:17:17.856217 | orchestrator | ok: [testbed-node-3] => (item=python-docker)
2026-01-10 14:17:17.856228 | orchestrator | ok: [testbed-node-2] => (item=python3-docker)
2026-01-10 14:17:17.856238 | orchestrator | ok: [testbed-node-4] => (item=python-docker)
2026-01-10 14:17:17.856249 | orchestrator | ok: [testbed-node-5] => (item=python-docker)
2026-01-10 14:17:17.856260 | orchestrator | ok: [testbed-node-0] => (item=python-docker)
2026-01-10 14:17:17.856397 | orchestrator | ok: [testbed-node-1] => (item=python-docker)
2026-01-10 14:17:17.856412 | orchestrator | ok: [testbed-node-2] => (item=python-docker)
2026-01-10 14:17:17.856423 | orchestrator |
2026-01-10 14:17:17.856434 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ******************
2026-01-10 14:17:17.856445 | orchestrator | Saturday 10 January 2026 14:17:12 +0000 (0:00:01.208) 0:06:34.369 ******
2026-01-10 14:17:17.856455 | orchestrator | skipping: [testbed-manager]
2026-01-10 14:17:17.856466 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:17:17.856477 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:17:17.856487 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:17:17.856498 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:17:17.856509 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:17:17.856520 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:17:17.856530 | orchestrator |
2026-01-10 14:17:17.856541 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] ***
2026-01-10 14:17:17.856552 | orchestrator | Saturday 10 January 2026 14:17:12 +0000 (0:00:00.563) 0:06:34.933 ******
2026-01-10 14:17:17.856567 | orchestrator | ok: [testbed-manager]
2026-01-10 14:17:17.856579 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:17:17.856589 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:17:17.856600 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:17:17.856611 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:17:17.856621 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:17:17.856632 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:17:17.856643 | orchestrator |
2026-01-10 14:17:17.856659 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] ***
2026-01-10 14:17:17.856679 | orchestrator | Saturday 10 January 2026 14:17:16 +0000 (0:00:03.838) 0:06:38.771 ******
2026-01-10 14:17:17.856698 | orchestrator | skipping: [testbed-manager]
2026-01-10 14:17:17.856717 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:17:17.856735 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:17:17.856752 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:17:17.856763 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:17:17.856773 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:17:17.856784 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:17:17.856795 | orchestrator |
2026-01-10 14:17:17.856806 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] ***
2026-01-10 14:17:17.856817 | orchestrator | Saturday 10 January 2026 14:17:17 +0000 (0:00:00.513) 0:06:39.285 ******
2026-01-10 14:17:17.856828 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)
2026-01-10 14:17:17.856839 | orchestrator | skipping: [testbed-manager] => (item=python-docker)
2026-01-10 14:17:17.856849 | orchestrator | skipping: [testbed-manager]
2026-01-10 14:17:17.856860 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)
2026-01-10 14:17:17.856871 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)
2026-01-10 14:17:17.856881 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:17:17.856892 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)
2026-01-10 14:17:17.856903 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)
2026-01-10 14:17:17.856914 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:17:17.856933 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)
2026-01-10 14:17:37.969416 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)
2026-01-10 14:17:37.969532 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:17:37.969548 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)
2026-01-10 14:17:37.969587 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)
2026-01-10 14:17:37.969599 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:17:37.969610 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)
2026-01-10 14:17:37.969621 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)
2026-01-10 14:17:37.969631 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:17:37.969642 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)
2026-01-10 14:17:37.969652 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)
2026-01-10 14:17:37.969663 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:17:37.969674 | orchestrator |
2026-01-10 14:17:37.969687 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install
python bindings from pip)] *** 2026-01-10 14:17:37.969698 | orchestrator | Saturday 10 January 2026 14:17:18 +0000 (0:00:00.824) 0:06:40.109 ****** 2026-01-10 14:17:37.969709 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:17:37.969720 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:17:37.969731 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:17:37.969741 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:17:37.969752 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:17:37.969763 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:17:37.969773 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:17:37.969784 | orchestrator | 2026-01-10 14:17:37.969794 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] *** 2026-01-10 14:17:37.969806 | orchestrator | Saturday 10 January 2026 14:17:18 +0000 (0:00:00.546) 0:06:40.656 ****** 2026-01-10 14:17:37.969817 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:17:37.969828 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:17:37.969838 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:17:37.969849 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:17:37.969859 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:17:37.969870 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:17:37.969881 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:17:37.969891 | orchestrator | 2026-01-10 14:17:37.969902 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] ******* 2026-01-10 14:17:37.969913 | orchestrator | Saturday 10 January 2026 14:17:19 +0000 (0:00:00.512) 0:06:41.169 ****** 2026-01-10 14:17:37.969923 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:17:37.969934 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:17:37.969945 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:17:37.969955 | orchestrator | skipping: 
[testbed-node-5] 2026-01-10 14:17:37.969966 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:17:37.969976 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:17:37.969987 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:17:37.969998 | orchestrator | 2026-01-10 14:17:37.970009 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] ***** 2026-01-10 14:17:37.970089 | orchestrator | Saturday 10 January 2026 14:17:19 +0000 (0:00:00.578) 0:06:41.747 ****** 2026-01-10 14:17:37.970101 | orchestrator | ok: [testbed-manager] 2026-01-10 14:17:37.970112 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:17:37.970123 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:17:37.970134 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:17:37.970144 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:17:37.970155 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:17:37.970165 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:17:37.970176 | orchestrator | 2026-01-10 14:17:37.970187 | orchestrator | TASK [osism.services.docker : Include config tasks] **************************** 2026-01-10 14:17:37.970198 | orchestrator | Saturday 10 January 2026 14:17:21 +0000 (0:00:02.056) 0:06:43.804 ****** 2026-01-10 14:17:37.970211 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:17:37.970275 | orchestrator | 2026-01-10 14:17:37.970288 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************ 2026-01-10 14:17:37.970299 | orchestrator | Saturday 10 January 2026 14:17:22 +0000 (0:00:00.920) 0:06:44.724 ****** 2026-01-10 14:17:37.970310 | orchestrator | ok: [testbed-manager] 2026-01-10 14:17:37.970321 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:17:37.970332 | orchestrator | changed: 
[testbed-node-4] 2026-01-10 14:17:37.970342 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:17:37.970353 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:17:37.970364 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:17:37.970375 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:17:37.970385 | orchestrator | 2026-01-10 14:17:37.970396 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] **************** 2026-01-10 14:17:37.970407 | orchestrator | Saturday 10 January 2026 14:17:23 +0000 (0:00:00.973) 0:06:45.698 ****** 2026-01-10 14:17:37.970417 | orchestrator | ok: [testbed-manager] 2026-01-10 14:17:37.970428 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:17:37.970439 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:17:37.970449 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:17:37.970460 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:17:37.970470 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:17:37.970481 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:17:37.970491 | orchestrator | 2026-01-10 14:17:37.970503 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] *********************** 2026-01-10 14:17:37.970513 | orchestrator | Saturday 10 January 2026 14:17:24 +0000 (0:00:00.892) 0:06:46.590 ****** 2026-01-10 14:17:37.970524 | orchestrator | ok: [testbed-manager] 2026-01-10 14:17:37.970535 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:17:37.970545 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:17:37.970556 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:17:37.970567 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:17:37.970577 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:17:37.970587 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:17:37.970598 | orchestrator | 2026-01-10 14:17:37.970609 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay 
file is changed] *** 2026-01-10 14:17:37.970641 | orchestrator | Saturday 10 January 2026 14:17:26 +0000 (0:00:01.615) 0:06:48.206 ****** 2026-01-10 14:17:37.970653 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:17:37.970663 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:17:37.970674 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:17:37.970685 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:17:37.970696 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:17:37.970706 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:17:37.970717 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:17:37.970728 | orchestrator | 2026-01-10 14:17:37.970739 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ****************** 2026-01-10 14:17:37.970750 | orchestrator | Saturday 10 January 2026 14:17:27 +0000 (0:00:01.421) 0:06:49.628 ****** 2026-01-10 14:17:37.970761 | orchestrator | ok: [testbed-manager] 2026-01-10 14:17:37.970771 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:17:37.970782 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:17:37.970793 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:17:37.970804 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:17:37.970815 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:17:37.970826 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:17:37.970836 | orchestrator | 2026-01-10 14:17:37.970847 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] ************* 2026-01-10 14:17:37.970858 | orchestrator | Saturday 10 January 2026 14:17:29 +0000 (0:00:01.507) 0:06:51.135 ****** 2026-01-10 14:17:37.970869 | orchestrator | changed: [testbed-manager] 2026-01-10 14:17:37.970880 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:17:37.970891 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:17:37.970902 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:17:37.970913 | orchestrator | changed: 
[testbed-node-0] 2026-01-10 14:17:37.970931 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:17:37.970942 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:17:37.970953 | orchestrator | 2026-01-10 14:17:37.970964 | orchestrator | TASK [osism.services.docker : Include service tasks] *************************** 2026-01-10 14:17:37.970975 | orchestrator | Saturday 10 January 2026 14:17:30 +0000 (0:00:01.422) 0:06:52.558 ****** 2026-01-10 14:17:37.970987 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:17:37.970998 | orchestrator | 2026-01-10 14:17:37.971009 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] *************************** 2026-01-10 14:17:37.971019 | orchestrator | Saturday 10 January 2026 14:17:31 +0000 (0:00:01.043) 0:06:53.601 ****** 2026-01-10 14:17:37.971030 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:17:37.971041 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:17:37.971052 | orchestrator | ok: [testbed-manager] 2026-01-10 14:17:37.971063 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:17:37.971073 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:17:37.971084 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:17:37.971095 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:17:37.971106 | orchestrator | 2026-01-10 14:17:37.971117 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2026-01-10 14:17:37.971128 | orchestrator | Saturday 10 January 2026 14:17:33 +0000 (0:00:01.428) 0:06:55.030 ****** 2026-01-10 14:17:37.971139 | orchestrator | ok: [testbed-manager] 2026-01-10 14:17:37.971150 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:17:37.971160 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:17:37.971171 | orchestrator | ok: [testbed-node-5] 
2026-01-10 14:17:37.971182 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:17:37.971192 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:17:37.971203 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:17:37.971214 | orchestrator | 2026-01-10 14:17:37.971225 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2026-01-10 14:17:37.971236 | orchestrator | Saturday 10 January 2026 14:17:34 +0000 (0:00:01.147) 0:06:56.177 ****** 2026-01-10 14:17:37.971294 | orchestrator | ok: [testbed-manager] 2026-01-10 14:17:37.971305 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:17:37.971316 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:17:37.971327 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:17:37.971337 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:17:37.971348 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:17:37.971374 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:17:37.971385 | orchestrator | 2026-01-10 14:17:37.971396 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2026-01-10 14:17:37.971407 | orchestrator | Saturday 10 January 2026 14:17:35 +0000 (0:00:01.127) 0:06:57.305 ****** 2026-01-10 14:17:37.971418 | orchestrator | ok: [testbed-manager] 2026-01-10 14:17:37.971428 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:17:37.971439 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:17:37.971449 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:17:37.971460 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:17:37.971471 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:17:37.971481 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:17:37.971492 | orchestrator | 2026-01-10 14:17:37.971502 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2026-01-10 14:17:37.971513 | orchestrator | Saturday 10 January 2026 14:17:36 +0000 (0:00:01.430) 0:06:58.735 ****** 2026-01-10 14:17:37.971524 | 
orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:17:37.971535 | orchestrator | 2026-01-10 14:17:37.971546 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-01-10 14:17:37.971627 | orchestrator | Saturday 10 January 2026 14:17:37 +0000 (0:00:00.879) 0:06:59.615 ****** 2026-01-10 14:17:37.971648 | orchestrator | 2026-01-10 14:17:37.971659 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-01-10 14:17:37.971670 | orchestrator | Saturday 10 January 2026 14:17:37 +0000 (0:00:00.039) 0:06:59.655 ****** 2026-01-10 14:17:37.971681 | orchestrator | 2026-01-10 14:17:37.971692 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-01-10 14:17:37.971703 | orchestrator | Saturday 10 January 2026 14:17:37 +0000 (0:00:00.041) 0:06:59.696 ****** 2026-01-10 14:17:37.971713 | orchestrator | 2026-01-10 14:17:37.971724 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-01-10 14:17:37.971745 | orchestrator | Saturday 10 January 2026 14:17:37 +0000 (0:00:00.045) 0:06:59.741 ****** 2026-01-10 14:18:04.982918 | orchestrator | 2026-01-10 14:18:04.983054 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-01-10 14:18:04.983109 | orchestrator | Saturday 10 January 2026 14:17:37 +0000 (0:00:00.039) 0:06:59.781 ****** 2026-01-10 14:18:04.983137 | orchestrator | 2026-01-10 14:18:04.983156 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-01-10 14:18:04.983173 | orchestrator | Saturday 10 January 2026 14:17:37 +0000 (0:00:00.038) 0:06:59.819 ****** 2026-01-10 14:18:04.983192 | orchestrator | 
2026-01-10 14:18:04.983241 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-01-10 14:18:04.983261 | orchestrator | Saturday 10 January 2026 14:17:37 +0000 (0:00:00.045) 0:06:59.865 ****** 2026-01-10 14:18:04.983280 | orchestrator | 2026-01-10 14:18:04.983296 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-01-10 14:18:04.983307 | orchestrator | Saturday 10 January 2026 14:17:37 +0000 (0:00:00.042) 0:06:59.907 ****** 2026-01-10 14:18:04.983319 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:18:04.983332 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:18:04.983343 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:18:04.983354 | orchestrator | 2026-01-10 14:18:04.983365 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2026-01-10 14:18:04.983376 | orchestrator | Saturday 10 January 2026 14:17:39 +0000 (0:00:01.219) 0:07:01.127 ****** 2026-01-10 14:18:04.983389 | orchestrator | changed: [testbed-manager] 2026-01-10 14:18:04.983408 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:18:04.983426 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:18:04.983446 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:18:04.983463 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:18:04.983482 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:18:04.983501 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:18:04.983520 | orchestrator | 2026-01-10 14:18:04.983539 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart logrotate service] *********** 2026-01-10 14:18:04.983557 | orchestrator | Saturday 10 January 2026 14:17:40 +0000 (0:00:01.499) 0:07:02.626 ****** 2026-01-10 14:18:04.983576 | orchestrator | changed: [testbed-manager] 2026-01-10 14:18:04.983595 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:18:04.983615 | orchestrator | changed: [testbed-node-4] 
2026-01-10 14:18:04.983635 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:18:04.983654 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:18:04.983671 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:18:04.983684 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:18:04.983696 | orchestrator | 2026-01-10 14:18:04.983709 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2026-01-10 14:18:04.983722 | orchestrator | Saturday 10 January 2026 14:17:41 +0000 (0:00:01.263) 0:07:03.889 ****** 2026-01-10 14:18:04.983735 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:18:04.983748 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:18:04.983760 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:18:04.983773 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:18:04.983785 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:18:04.983796 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:18:04.983807 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:18:04.983849 | orchestrator | 2026-01-10 14:18:04.983860 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2026-01-10 14:18:04.983871 | orchestrator | Saturday 10 January 2026 14:17:44 +0000 (0:00:02.416) 0:07:06.306 ****** 2026-01-10 14:18:04.983882 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:18:04.983893 | orchestrator | 2026-01-10 14:18:04.983904 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2026-01-10 14:18:04.983915 | orchestrator | Saturday 10 January 2026 14:17:44 +0000 (0:00:00.121) 0:07:06.427 ****** 2026-01-10 14:18:04.983926 | orchestrator | ok: [testbed-manager] 2026-01-10 14:18:04.983936 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:18:04.983947 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:18:04.983958 | orchestrator | changed: [testbed-node-5] 2026-01-10 
14:18:04.983968 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:18:04.983979 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:18:04.984005 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:18:04.984022 | orchestrator | 2026-01-10 14:18:04.984049 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2026-01-10 14:18:04.984072 | orchestrator | Saturday 10 January 2026 14:17:45 +0000 (0:00:01.054) 0:07:07.482 ****** 2026-01-10 14:18:04.984089 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:18:04.984107 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:18:04.984123 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:18:04.984141 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:18:04.984158 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:18:04.984176 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:18:04.984194 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:18:04.984260 | orchestrator | 2026-01-10 14:18:04.984278 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2026-01-10 14:18:04.984296 | orchestrator | Saturday 10 January 2026 14:17:46 +0000 (0:00:00.569) 0:07:08.052 ****** 2026-01-10 14:18:04.984316 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:18:04.984338 | orchestrator | 2026-01-10 14:18:04.984357 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2026-01-10 14:18:04.984374 | orchestrator | Saturday 10 January 2026 14:17:47 +0000 (0:00:01.103) 0:07:09.155 ****** 2026-01-10 14:18:04.984392 | orchestrator | ok: [testbed-manager] 2026-01-10 14:18:04.984403 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:18:04.984414 | orchestrator 
| ok: [testbed-node-4] 2026-01-10 14:18:04.984425 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:18:04.984436 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:18:04.984447 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:18:04.984457 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:18:04.984468 | orchestrator | 2026-01-10 14:18:04.984479 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2026-01-10 14:18:04.984490 | orchestrator | Saturday 10 January 2026 14:17:48 +0000 (0:00:00.884) 0:07:10.040 ****** 2026-01-10 14:18:04.984501 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2026-01-10 14:18:04.984533 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2026-01-10 14:18:04.984545 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2026-01-10 14:18:04.984556 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2026-01-10 14:18:04.984567 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2026-01-10 14:18:04.984578 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2026-01-10 14:18:04.984588 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2026-01-10 14:18:04.984599 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2026-01-10 14:18:04.984610 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2026-01-10 14:18:04.984620 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2026-01-10 14:18:04.984653 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2026-01-10 14:18:04.984664 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2026-01-10 14:18:04.984675 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2026-01-10 14:18:04.984685 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2026-01-10 14:18:04.984696 | orchestrator | 2026-01-10 14:18:04.984707 | orchestrator | TASK 
[osism.commons.docker_compose : This install type is not supported] ******* 2026-01-10 14:18:04.984717 | orchestrator | Saturday 10 January 2026 14:17:50 +0000 (0:00:02.523) 0:07:12.564 ****** 2026-01-10 14:18:04.984728 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:18:04.984739 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:18:04.984750 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:18:04.984760 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:18:04.984770 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:18:04.984781 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:18:04.984792 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:18:04.984802 | orchestrator | 2026-01-10 14:18:04.984813 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] *** 2026-01-10 14:18:04.984824 | orchestrator | Saturday 10 January 2026 14:17:51 +0000 (0:00:00.688) 0:07:13.253 ****** 2026-01-10 14:18:04.984836 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:18:04.984850 | orchestrator | 2026-01-10 14:18:04.984861 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] *** 2026-01-10 14:18:04.984871 | orchestrator | Saturday 10 January 2026 14:17:52 +0000 (0:00:00.802) 0:07:14.055 ****** 2026-01-10 14:18:04.984882 | orchestrator | ok: [testbed-manager] 2026-01-10 14:18:04.984893 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:18:04.984903 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:18:04.984914 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:18:04.984925 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:18:04.984935 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:18:04.984946 | orchestrator | ok: 
[testbed-node-2] 2026-01-10 14:18:04.984956 | orchestrator | 2026-01-10 14:18:04.984967 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2026-01-10 14:18:04.984978 | orchestrator | Saturday 10 January 2026 14:17:53 +0000 (0:00:00.897) 0:07:14.953 ****** 2026-01-10 14:18:04.984989 | orchestrator | ok: [testbed-manager] 2026-01-10 14:18:04.984999 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:18:04.985010 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:18:04.985020 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:18:04.985031 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:18:04.985041 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:18:04.985052 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:18:04.985063 | orchestrator | 2026-01-10 14:18:04.985073 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2026-01-10 14:18:04.985084 | orchestrator | Saturday 10 January 2026 14:17:54 +0000 (0:00:01.077) 0:07:16.030 ****** 2026-01-10 14:18:04.985103 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:18:04.985114 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:18:04.985124 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:18:04.985135 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:18:04.985146 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:18:04.985156 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:18:04.985167 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:18:04.985178 | orchestrator | 2026-01-10 14:18:04.985188 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2026-01-10 14:18:04.985199 | orchestrator | Saturday 10 January 2026 14:17:54 +0000 (0:00:00.500) 0:07:16.530 ****** 2026-01-10 14:18:04.985240 | orchestrator | ok: [testbed-manager] 2026-01-10 14:18:04.985251 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:18:04.985276 | 
orchestrator | ok: [testbed-node-4]
2026-01-10 14:18:04.985299 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:18:04.985325 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:18:04.985342 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:18:04.985359 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:18:04.985376 | orchestrator |
2026-01-10 14:18:04.985394 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] ***************
2026-01-10 14:18:04.985411 | orchestrator | Saturday 10 January 2026 14:17:56 +0000 (0:00:01.566) 0:07:18.097 ******
2026-01-10 14:18:04.985430 | orchestrator | skipping: [testbed-manager]
2026-01-10 14:18:04.985450 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:18:04.985468 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:18:04.985487 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:18:04.985500 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:18:04.985511 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:18:04.985522 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:18:04.985533 | orchestrator |
2026-01-10 14:18:04.985544 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] ****
2026-01-10 14:18:04.985555 | orchestrator | Saturday 10 January 2026 14:17:56 +0000 (0:00:00.499) 0:07:18.597 ******
2026-01-10 14:18:04.985566 | orchestrator | ok: [testbed-manager]
2026-01-10 14:18:04.985577 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:18:04.985587 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:18:04.985598 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:18:04.985609 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:18:04.985620 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:18:04.985641 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:18:38.722854 | orchestrator |
2026-01-10 14:18:38.723002 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] ***********
2026-01-10 14:18:38.723025 | orchestrator | Saturday 10 January 2026 14:18:04 +0000 (0:00:08.323) 0:07:26.921 ******
2026-01-10 14:18:38.723042 | orchestrator | ok: [testbed-manager]
2026-01-10 14:18:38.723060 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:18:38.723078 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:18:38.723094 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:18:38.723111 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:18:38.723127 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:18:38.723143 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:18:38.723220 | orchestrator |
2026-01-10 14:18:38.723241 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] **********************
2026-01-10 14:18:38.723259 | orchestrator | Saturday 10 January 2026 14:18:06 +0000 (0:00:01.607) 0:07:28.528 ******
2026-01-10 14:18:38.723276 | orchestrator | ok: [testbed-manager]
2026-01-10 14:18:38.723294 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:18:38.723312 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:18:38.723333 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:18:38.723353 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:18:38.723372 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:18:38.723392 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:18:38.723411 | orchestrator |
2026-01-10 14:18:38.723431 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] ****
2026-01-10 14:18:38.723450 | orchestrator | Saturday 10 January 2026 14:18:08 +0000 (0:00:01.796) 0:07:30.324 ******
2026-01-10 14:18:38.723469 | orchestrator | ok: [testbed-manager]
2026-01-10 14:18:38.723487 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:18:38.723505 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:18:38.723522 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:18:38.723539 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:18:38.723556 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:18:38.723572 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:18:38.723591 | orchestrator |
2026-01-10 14:18:38.723610 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-01-10 14:18:38.723628 | orchestrator | Saturday 10 January 2026 14:18:10 +0000 (0:00:01.664) 0:07:31.988 ******
2026-01-10 14:18:38.723725 | orchestrator | ok: [testbed-manager]
2026-01-10 14:18:38.723747 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:18:38.723764 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:18:38.723781 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:18:38.723798 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:18:38.723814 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:18:38.723831 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:18:38.723847 | orchestrator |
2026-01-10 14:18:38.723862 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-01-10 14:18:38.723878 | orchestrator | Saturday 10 January 2026 14:18:10 +0000 (0:00:00.881) 0:07:32.870 ******
2026-01-10 14:18:38.723892 | orchestrator | skipping: [testbed-manager]
2026-01-10 14:18:38.723909 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:18:38.723925 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:18:38.723941 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:18:38.723957 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:18:38.723973 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:18:38.723983 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:18:38.723993 | orchestrator |
2026-01-10 14:18:38.724003 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] *****
2026-01-10 14:18:38.724012 | orchestrator | Saturday 10 January 2026 14:18:11 +0000 (0:00:01.020) 0:07:33.891 ******
2026-01-10 14:18:38.724022 | orchestrator | skipping: [testbed-manager]
2026-01-10 14:18:38.724032 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:18:38.724041 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:18:38.724051 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:18:38.724060 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:18:38.724069 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:18:38.724079 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:18:38.724088 | orchestrator |
2026-01-10 14:18:38.724097 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ******
2026-01-10 14:18:38.724107 | orchestrator | Saturday 10 January 2026 14:18:12 +0000 (0:00:00.583) 0:07:34.475 ******
2026-01-10 14:18:38.724117 | orchestrator | ok: [testbed-manager]
2026-01-10 14:18:38.724126 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:18:38.724136 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:18:38.724145 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:18:38.724201 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:18:38.724212 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:18:38.724222 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:18:38.724231 | orchestrator |
2026-01-10 14:18:38.724241 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] ***
2026-01-10 14:18:38.724251 | orchestrator | Saturday 10 January 2026 14:18:13 +0000 (0:00:00.555) 0:07:35.030 ******
2026-01-10 14:18:38.724261 | orchestrator | ok: [testbed-manager]
2026-01-10 14:18:38.724271 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:18:38.724280 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:18:38.724289 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:18:38.724299 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:18:38.724308 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:18:38.724318 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:18:38.724327 | orchestrator |
2026-01-10 14:18:38.724337 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] ***
2026-01-10 14:18:38.724348 | orchestrator | Saturday 10 January 2026 14:18:13 +0000 (0:00:00.524) 0:07:35.554 ******
2026-01-10 14:18:38.724365 | orchestrator | ok: [testbed-manager]
2026-01-10 14:18:38.724379 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:18:38.724394 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:18:38.724409 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:18:38.724425 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:18:38.724440 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:18:38.724456 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:18:38.724472 | orchestrator |
2026-01-10 14:18:38.724487 | orchestrator | TASK [osism.services.chrony : Populate service facts] **************************
2026-01-10 14:18:38.724522 | orchestrator | Saturday 10 January 2026 14:18:14 +0000 (0:00:00.724) 0:07:36.278 ******
2026-01-10 14:18:38.724540 | orchestrator | ok: [testbed-manager]
2026-01-10 14:18:38.724556 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:18:38.724565 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:18:38.724575 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:18:38.724584 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:18:38.724594 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:18:38.724603 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:18:38.724612 | orchestrator |
2026-01-10 14:18:38.724650 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************
2026-01-10 14:18:38.724667 | orchestrator | Saturday 10 January 2026 14:18:20 +0000 (0:00:05.698) 0:07:41.977 ******
2026-01-10 14:18:38.724684 | orchestrator | skipping: [testbed-manager]
2026-01-10 14:18:38.724700 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:18:38.724716 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:18:38.724732 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:18:38.724747 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:18:38.724762 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:18:38.724778 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:18:38.724794 | orchestrator |
2026-01-10 14:18:38.724810 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] *****
2026-01-10 14:18:38.724826 | orchestrator | Saturday 10 January 2026 14:18:20 +0000 (0:00:00.521) 0:07:42.498 ******
2026-01-10 14:18:38.724845 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:18:38.724858 | orchestrator |
2026-01-10 14:18:38.724868 | orchestrator | TASK [osism.services.chrony : Install package] *********************************
2026-01-10 14:18:38.724878 | orchestrator | Saturday 10 January 2026 14:18:21 +0000 (0:00:01.016) 0:07:43.515 ******
2026-01-10 14:18:38.724887 | orchestrator | ok: [testbed-manager]
2026-01-10 14:18:38.724897 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:18:38.724906 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:18:38.724916 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:18:38.724925 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:18:38.724935 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:18:38.724944 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:18:38.724953 | orchestrator |
2026-01-10 14:18:38.724963 | orchestrator | TASK [osism.services.chrony : Manage chrony service] ***************************
2026-01-10 14:18:38.724972 | orchestrator | Saturday 10 January 2026 14:18:23 +0000 (0:00:01.956) 0:07:45.472 ******
2026-01-10 14:18:38.724982 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:18:38.724991 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:18:38.725001 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:18:38.725010 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:18:38.725019 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:18:38.725029 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:18:38.725038 | orchestrator | ok: [testbed-manager]
2026-01-10 14:18:38.725047 | orchestrator |
2026-01-10 14:18:38.725057 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] **************
2026-01-10 14:18:38.725067 | orchestrator | Saturday 10 January 2026 14:18:25 +0000 (0:00:01.698) 0:07:47.170 ******
2026-01-10 14:18:38.725076 | orchestrator | ok: [testbed-manager]
2026-01-10 14:18:38.725085 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:18:38.725095 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:18:38.725104 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:18:38.725113 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:18:38.725123 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:18:38.725135 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:18:38.725150 | orchestrator |
2026-01-10 14:18:38.725204 | orchestrator | TASK [osism.services.chrony : Copy configuration file] *************************
2026-01-10 14:18:38.725222 | orchestrator | Saturday 10 January 2026 14:18:26 +0000 (0:00:00.871) 0:07:48.041 ******
2026-01-10 14:18:38.725239 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-01-10 14:18:38.725273 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-01-10 14:18:38.725290 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-01-10 14:18:38.725313 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-01-10 14:18:38.725323 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-01-10 14:18:38.725332 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-01-10 14:18:38.725342 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-01-10 14:18:38.725351 | orchestrator |
2026-01-10 14:18:38.725361 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ******
2026-01-10 14:18:38.725370 | orchestrator | Saturday 10 January 2026 14:18:28 +0000 (0:00:01.942) 0:07:49.984 ******
2026-01-10 14:18:38.725380 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:18:38.725390 | orchestrator |
2026-01-10 14:18:38.725400 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] ****************************
2026-01-10 14:18:38.725409 | orchestrator | Saturday 10 January 2026 14:18:28 +0000 (0:00:00.817) 0:07:50.801 ******
2026-01-10 14:18:38.725419 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:18:38.725428 | orchestrator | changed: [testbed-manager]
2026-01-10 14:18:38.725438 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:18:38.725448 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:18:38.725464 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:18:38.725478 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:18:38.725492 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:18:38.725509 | orchestrator |
2026-01-10 14:18:38.725537 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] *****************************
2026-01-10 14:19:10.964969 | orchestrator | Saturday 10 January 2026 14:18:38 +0000 (0:00:09.860) 0:08:00.662 ******
2026-01-10 14:19:10.965071 | orchestrator | ok: [testbed-manager]
2026-01-10 14:19:10.965087 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:19:10.965098 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:19:10.965142 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:19:10.965162 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:19:10.965180 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:19:10.965198 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:19:10.965218 | orchestrator |
2026-01-10 14:19:10.965238 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] *********
2026-01-10 14:19:10.965258 | orchestrator | Saturday 10 January 2026 14:18:40 +0000 (0:00:02.039) 0:08:02.702 ******
2026-01-10 14:19:10.965276 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:19:10.965288 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:19:10.965299 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:19:10.965310 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:19:10.965320 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:19:10.965331 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:19:10.965342 | orchestrator |
2026-01-10 14:19:10.965353 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] ***************
2026-01-10 14:19:10.965364 | orchestrator | Saturday 10 January 2026 14:18:42 +0000 (0:00:01.438) 0:08:04.140 ******
2026-01-10 14:19:10.965375 | orchestrator | changed: [testbed-manager]
2026-01-10 14:19:10.965387 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:19:10.965423 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:19:10.965434 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:19:10.965445 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:19:10.965455 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:19:10.965466 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:19:10.965476 | orchestrator |
2026-01-10 14:19:10.965487 | orchestrator | PLAY [Apply bootstrap role part 2] *********************************************
2026-01-10 14:19:10.965498 | orchestrator |
2026-01-10 14:19:10.965510 | orchestrator | TASK [Include hardening role] **************************************************
2026-01-10 14:19:10.965523 | orchestrator | Saturday 10 January 2026 14:18:43 +0000 (0:00:01.305) 0:08:05.446 ******
2026-01-10 14:19:10.965535 | orchestrator | skipping: [testbed-manager]
2026-01-10 14:19:10.965547 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:19:10.965559 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:19:10.965572 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:19:10.965583 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:19:10.965596 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:19:10.965608 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:19:10.965620 | orchestrator |
2026-01-10 14:19:10.965632 | orchestrator | PLAY [Apply bootstrap roles part 3] ********************************************
2026-01-10 14:19:10.965644 | orchestrator |
2026-01-10 14:19:10.965657 | orchestrator | TASK [osism.services.journald : Copy configuration file] ***********************
2026-01-10 14:19:10.965669 | orchestrator | Saturday 10 January 2026 14:18:44 +0000 (0:00:00.736) 0:08:06.182 ******
2026-01-10 14:19:10.965681 | orchestrator | changed: [testbed-manager]
2026-01-10 14:19:10.965693 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:19:10.965705 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:19:10.965717 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:19:10.965729 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:19:10.965741 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:19:10.965753 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:19:10.965765 | orchestrator |
2026-01-10 14:19:10.965777 | orchestrator | TASK [osism.services.journald : Manage journald service] ***********************
2026-01-10 14:19:10.965789 | orchestrator | Saturday 10 January 2026 14:18:45 +0000 (0:00:01.493) 0:08:07.676 ******
2026-01-10 14:19:10.965801 | orchestrator | ok: [testbed-manager]
2026-01-10 14:19:10.965814 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:19:10.965826 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:19:10.965838 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:19:10.965850 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:19:10.965861 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:19:10.965872 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:19:10.965882 | orchestrator |
2026-01-10 14:19:10.965893 | orchestrator | TASK [Include auditd role] *****************************************************
2026-01-10 14:19:10.965917 | orchestrator | Saturday 10 January 2026 14:18:47 +0000 (0:00:01.509) 0:08:09.185 ******
2026-01-10 14:19:10.965929 | orchestrator | skipping: [testbed-manager]
2026-01-10 14:19:10.965939 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:19:10.965950 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:19:10.965960 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:19:10.965971 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:19:10.965982 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:19:10.965992 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:19:10.966003 | orchestrator |
2026-01-10 14:19:10.966014 | orchestrator | TASK [Include smartd role] *****************************************************
2026-01-10 14:19:10.966066 | orchestrator | Saturday 10 January 2026 14:18:47 +0000 (0:00:00.488) 0:08:09.673 ******
2026-01-10 14:19:10.966078 | orchestrator | included: osism.services.smartd for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:19:10.966090 | orchestrator |
2026-01-10 14:19:10.966101 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] *****
2026-01-10 14:19:10.966136 | orchestrator | Saturday 10 January 2026 14:18:48 +0000 (0:00:01.037) 0:08:10.711 ******
2026-01-10 14:19:10.966159 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:19:10.966173 | orchestrator |
2026-01-10 14:19:10.966183 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] *******************
2026-01-10 14:19:10.966196 | orchestrator | Saturday 10 January 2026 14:18:49 +0000 (0:00:00.824) 0:08:11.536 ******
2026-01-10 14:19:10.966214 | orchestrator | changed: [testbed-manager]
2026-01-10 14:19:10.966232 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:19:10.966251 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:19:10.966271 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:19:10.966291 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:19:10.966309 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:19:10.966328 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:19:10.966341 | orchestrator |
2026-01-10 14:19:10.966370 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] ****************
2026-01-10 14:19:10.966381 | orchestrator | Saturday 10 January 2026 14:18:59 +0000 (0:00:09.499) 0:08:21.035 ******
2026-01-10 14:19:10.966392 | orchestrator | changed: [testbed-manager]
2026-01-10 14:19:10.966402 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:19:10.966412 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:19:10.966423 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:19:10.966433 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:19:10.966444 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:19:10.966454 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:19:10.966465 | orchestrator |
2026-01-10 14:19:10.966475 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] ***********
2026-01-10 14:19:10.966486 | orchestrator | Saturday 10 January 2026 14:19:00 +0000 (0:00:01.059) 0:08:22.094 ******
2026-01-10 14:19:10.966496 | orchestrator | changed: [testbed-manager]
2026-01-10 14:19:10.966507 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:19:10.966517 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:19:10.966527 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:19:10.966538 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:19:10.966548 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:19:10.966558 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:19:10.966569 | orchestrator |
2026-01-10 14:19:10.966579 | orchestrator | TASK [osism.services.smartd : Manage smartd service] ***************************
2026-01-10 14:19:10.966590 | orchestrator | Saturday 10 January 2026 14:19:01 +0000 (0:00:01.369) 0:08:23.464 ******
2026-01-10 14:19:10.966600 | orchestrator | changed: [testbed-manager]
2026-01-10 14:19:10.966611 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:19:10.966621 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:19:10.966632 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:19:10.966642 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:19:10.966652 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:19:10.966663 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:19:10.966689 | orchestrator |
2026-01-10 14:19:10.966711 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] ***********
2026-01-10 14:19:10.966723 | orchestrator | Saturday 10 January 2026 14:19:03 +0000 (0:00:02.041) 0:08:25.505 ******
2026-01-10 14:19:10.966733 | orchestrator | changed: [testbed-manager]
2026-01-10 14:19:10.966744 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:19:10.966755 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:19:10.966765 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:19:10.966776 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:19:10.966786 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:19:10.966797 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:19:10.966807 | orchestrator |
2026-01-10 14:19:10.966818 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] ***************
2026-01-10 14:19:10.966828 | orchestrator | Saturday 10 January 2026 14:19:04 +0000 (0:00:01.288) 0:08:26.793 ******
2026-01-10 14:19:10.966848 | orchestrator | changed: [testbed-manager]
2026-01-10 14:19:10.966858 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:19:10.966869 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:19:10.966879 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:19:10.966890 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:19:10.966900 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:19:10.966911 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:19:10.966921 | orchestrator |
2026-01-10 14:19:10.966932 | orchestrator | PLAY [Set state bootstrap] *****************************************************
2026-01-10 14:19:10.966943 | orchestrator |
2026-01-10 14:19:10.966953 | orchestrator | TASK [Set osism.bootstrap.status fact] *****************************************
2026-01-10 14:19:10.966964 | orchestrator | Saturday 10 January 2026 14:19:05 +0000 (0:00:01.140) 0:08:27.934 ******
2026-01-10 14:19:10.966975 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:19:10.966985 | orchestrator |
2026-01-10 14:19:10.966996 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-01-10 14:19:10.967012 | orchestrator | Saturday 10 January 2026 14:19:06 +0000 (0:00:00.834) 0:08:28.769 ******
2026-01-10 14:19:10.967023 | orchestrator | ok: [testbed-manager]
2026-01-10 14:19:10.967034 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:19:10.967044 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:19:10.967055 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:19:10.967066 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:19:10.967076 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:19:10.967086 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:19:10.967097 | orchestrator |
2026-01-10 14:19:10.967160 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-01-10 14:19:10.967175 | orchestrator | Saturday 10 January 2026 14:19:07 +0000 (0:00:01.051) 0:08:29.820 ******
2026-01-10 14:19:10.967185 | orchestrator | changed: [testbed-manager]
2026-01-10 14:19:10.967196 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:19:10.967207 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:19:10.967218 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:19:10.967228 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:19:10.967240 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:19:10.967260 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:19:10.967278 | orchestrator |
2026-01-10 14:19:10.967300 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] **************************************
2026-01-10 14:19:10.967322 | orchestrator | Saturday 10 January 2026 14:19:09 +0000 (0:00:01.173) 0:08:30.994 ******
2026-01-10 14:19:10.967342 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:19:10.967354 | orchestrator |
2026-01-10 14:19:10.967364 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-01-10 14:19:10.967375 | orchestrator | Saturday 10 January 2026 14:19:10 +0000 (0:00:01.038) 0:08:32.032 ******
2026-01-10 14:19:10.967385 | orchestrator | ok: [testbed-manager]
2026-01-10 14:19:10.967396 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:19:10.967407 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:19:10.967418 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:19:10.967428 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:19:10.967439 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:19:10.967449 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:19:10.967459 | orchestrator |
2026-01-10 14:19:10.967478 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-01-10 14:19:12.664526 | orchestrator | Saturday 10 January 2026 14:19:10 +0000 (0:00:00.871) 0:08:32.904 ******
2026-01-10 14:19:12.664627 | orchestrator | changed: [testbed-manager]
2026-01-10 14:19:12.664643 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:19:12.664655 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:19:12.664666 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:19:12.664721 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:19:12.664734 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:19:12.664744 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:19:12.664755 | orchestrator |
2026-01-10 14:19:12.664767 | orchestrator | PLAY RECAP *********************************************************************
2026-01-10 14:19:12.664780 | orchestrator | testbed-manager : ok=168  changed=40  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0
2026-01-10 14:19:12.664792 | orchestrator | testbed-node-0 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-01-10 14:19:12.664803 | orchestrator | testbed-node-1 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-01-10 14:19:12.664814 | orchestrator | testbed-node-2 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-01-10 14:19:12.664825 | orchestrator | testbed-node-3 : ok=175  changed=65  unreachable=0 failed=0 skipped=38  rescued=0 ignored=0
2026-01-10 14:19:12.664835 | orchestrator | testbed-node-4 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-01-10 14:19:12.664846 | orchestrator | testbed-node-5 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-01-10 14:19:12.664857 | orchestrator |
2026-01-10 14:19:12.664868 | orchestrator |
2026-01-10 14:19:12.664879 | orchestrator | TASKS RECAP ********************************************************************
2026-01-10 14:19:12.664889 | orchestrator | Saturday 10 January 2026 14:19:12 +0000 (0:00:01.160) 0:08:34.064 ******
2026-01-10 14:19:12.664900 | orchestrator | ===============================================================================
2026-01-10 14:19:12.664911 | orchestrator | osism.commons.packages : Install required packages --------------------- 81.40s
2026-01-10 14:19:12.664921 | orchestrator | osism.commons.packages : Download required packages -------------------- 37.63s
2026-01-10 14:19:12.664932 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 36.65s
2026-01-10 14:19:12.664943 | orchestrator | osism.commons.repository : Update package cache ------------------------ 17.66s
2026-01-10 14:19:12.664954 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 12.89s
2026-01-10 14:19:12.664966 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 12.73s
2026-01-10 14:19:12.664977 | orchestrator | osism.services.docker : Install docker package ------------------------- 11.78s
2026-01-10 14:19:12.664987 | orchestrator | osism.services.docker : Install containerd package --------------------- 10.78s
2026-01-10 14:19:12.664998 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.86s
2026-01-10 14:19:12.665008 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 9.70s
2026-01-10 14:19:12.665033 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 9.50s
2026-01-10 14:19:12.665044 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 9.18s
2026-01-10 14:19:12.665055 | orchestrator | osism.services.docker : Add repository ---------------------------------- 8.96s
2026-01-10 14:19:12.665065 | orchestrator | osism.services.rng : Install rng package -------------------------------- 8.92s
2026-01-10 14:19:12.665076 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 8.32s
2026-01-10 14:19:12.665087 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 8.03s
2026-01-10 14:19:12.665097 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 7.38s
2026-01-10 14:19:12.665135 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 6.29s
2026-01-10 14:19:12.665146 | orchestrator | osism.commons.sysctl : Set sysctl parameters on rabbitmq ---------------- 6.01s
2026-01-10 14:19:12.665165 | orchestrator | osism.services.chrony : Populate service facts -------------------------- 5.70s
2026-01-10 14:19:12.999132 | orchestrator | + osism apply fail2ban
2026-01-10 14:19:25.992488 | orchestrator | 2026-01-10 14:19:25 | INFO  | Task 474a0f97-0a33-43da-a222-75504d2db196 (fail2ban) was prepared for execution.
2026-01-10 14:19:25.992578 | orchestrator | 2026-01-10 14:19:25 | INFO  | It takes a moment until task 474a0f97-0a33-43da-a222-75504d2db196 (fail2ban) has been started and output is visible here.
2026-01-10 14:19:49.301245 | orchestrator |
2026-01-10 14:19:49.301348 | orchestrator | PLAY [Apply role fail2ban] *****************************************************
2026-01-10 14:19:49.301366 | orchestrator |
2026-01-10 14:19:49.301378 | orchestrator | TASK [osism.services.fail2ban : Include distribution specific install tasks] ***
2026-01-10 14:19:49.301390 | orchestrator | Saturday 10 January 2026 14:19:30 +0000 (0:00:00.257) 0:00:00.257 ******
2026-01-10 14:19:49.301402 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/fail2ban/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-10 14:19:49.301416 | orchestrator |
2026-01-10 14:19:49.301427 | orchestrator | TASK [osism.services.fail2ban : Install fail2ban package] **********************
2026-01-10 14:19:49.301438 | orchestrator | Saturday 10 January 2026 14:19:31 +0000 (0:00:01.180) 0:00:01.438 ******
2026-01-10 14:19:49.301450 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:19:49.301461 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:19:49.301472 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:19:49.301483 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:19:49.301494 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:19:49.301505 | orchestrator | changed: [testbed-manager]
2026-01-10 14:19:49.301516 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:19:49.301527 | orchestrator |
2026-01-10 14:19:49.301537 | orchestrator | TASK [osism.services.fail2ban : Copy configuration files] **********************
2026-01-10 14:19:49.301548 | orchestrator | Saturday 10 January 2026 14:19:43 +0000 (0:00:12.032) 0:00:13.471 ******
2026-01-10 14:19:49.301559 | orchestrator | changed: [testbed-manager]
2026-01-10 14:19:49.301570 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:19:49.301609 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:19:49.301621 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:19:49.301632 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:19:49.301642 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:19:49.301653 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:19:49.301664 | orchestrator |
2026-01-10 14:19:49.301675 | orchestrator | TASK [osism.services.fail2ban : Manage fail2ban service] ***********************
2026-01-10 14:19:49.301686 | orchestrator | Saturday 10 January 2026 14:19:45 +0000 (0:00:01.659) 0:00:15.130 ******
2026-01-10 14:19:49.301697 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:19:49.301709 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:19:49.301720 | orchestrator | ok: [testbed-manager]
2026-01-10 14:19:49.301731 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:19:49.301742 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:19:49.301753 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:19:49.301764 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:19:49.301774 | orchestrator |
2026-01-10 14:19:49.301785 | orchestrator | TASK [osism.services.fail2ban : Reload fail2ban configuration] *****************
2026-01-10 14:19:49.301796 | orchestrator | Saturday 10 January 2026 14:19:47 +0000 (0:00:01.608) 0:00:16.739 ******
2026-01-10 14:19:49.301807 | orchestrator | changed: [testbed-manager]
2026-01-10 14:19:49.301818 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:19:49.301829 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:19:49.301840 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:19:49.301851 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:19:49.301861 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:19:49.301872 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:19:49.301911 | orchestrator |
2026-01-10 14:19:49.301923 | orchestrator | PLAY RECAP *********************************************************************
2026-01-10 14:19:49.301934 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:19:49.301945 | orchestrator | testbed-node-0 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:19:49.301956 | orchestrator | testbed-node-1 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:19:49.301967 | orchestrator | testbed-node-2 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:19:49.301978 | orchestrator | testbed-node-3 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:19:49.301989 | orchestrator | testbed-node-4 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:19:49.302000 | orchestrator | testbed-node-5 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:19:49.302010 | orchestrator |
2026-01-10 14:19:49.302094 | orchestrator |
2026-01-10 14:19:49.302106 | orchestrator | TASKS RECAP ********************************************************************
2026-01-10 14:19:49.302117 | orchestrator | Saturday 10 January 2026 14:19:48 +0000 (0:00:01.657) 0:00:18.396 ******
2026-01-10 14:19:49.302128 | orchestrator | ===============================================================================
2026-01-10 14:19:49.302139 | orchestrator | osism.services.fail2ban : Install fail2ban package --------------------- 12.03s
2026-01-10 14:19:49.302150 | orchestrator | osism.services.fail2ban : Copy configuration files ---------------------- 1.66s
2026-01-10 14:19:49.302161 | orchestrator | osism.services.fail2ban : Reload fail2ban configuration ----------------- 1.66s
2026-01-10 14:19:49.302172 | orchestrator | osism.services.fail2ban : Manage fail2ban service ----------------------- 1.61s
2026-01-10 14:19:49.302183 | orchestrator | osism.services.fail2ban : Include distribution specific install tasks --- 1.18s
2026-01-10 14:19:49.778596 | orchestrator | + [[ -e /etc/redhat-release ]]
2026-01-10 14:19:49.778703 | orchestrator | + osism apply network
2026-01-10 14:20:01.886236 | orchestrator | 2026-01-10 14:20:01 | INFO  | Task 53a2828d-057b-4003-acac-1ef6f5a8c331 (network) was prepared for execution.
2026-01-10 14:20:01.886342 | orchestrator | 2026-01-10 14:20:01 | INFO  | It takes a moment until task 53a2828d-057b-4003-acac-1ef6f5a8c331 (network) has been started and output is visible here.
2026-01-10 14:20:31.465786 | orchestrator |
2026-01-10 14:20:31.465879 | orchestrator | PLAY [Apply role network] ******************************************************
2026-01-10 14:20:31.465888 | orchestrator |
2026-01-10 14:20:31.465895 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ******
2026-01-10 14:20:31.465902 | orchestrator | Saturday 10 January 2026 14:20:06 +0000 (0:00:00.270) 0:00:00.270 ******
2026-01-10 14:20:31.465908 | orchestrator | ok: [testbed-manager]
2026-01-10 14:20:31.465916 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:20:31.465922 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:20:31.465929 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:20:31.465935 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:20:31.465941 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:20:31.465948 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:20:31.465954 | orchestrator |
2026-01-10 14:20:31.465960 | orchestrator | TASK [osism.commons.network : Include type specific tasks] *********************
2026-01-10 14:20:31.465966 | orchestrator | Saturday 10 January 2026 14:20:07 +0000 (0:00:00.714) 0:00:00.984 ******
2026-01-10 14:20:31.465974 | orchestrator | included:
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-10 14:20:31.466003 | orchestrator | 2026-01-10 14:20:31.466094 | orchestrator | TASK [osism.commons.network : Install required packages] *********************** 2026-01-10 14:20:31.466101 | orchestrator | Saturday 10 January 2026 14:20:08 +0000 (0:00:01.195) 0:00:02.179 ****** 2026-01-10 14:20:31.466107 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:20:31.466113 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:20:31.466118 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:20:31.466124 | orchestrator | ok: [testbed-manager] 2026-01-10 14:20:31.466130 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:20:31.466137 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:20:31.466143 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:20:31.466149 | orchestrator | 2026-01-10 14:20:31.466155 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] ************************* 2026-01-10 14:20:31.466162 | orchestrator | Saturday 10 January 2026 14:20:10 +0000 (0:00:01.903) 0:00:04.083 ****** 2026-01-10 14:20:31.466168 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:20:31.466174 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:20:31.466180 | orchestrator | ok: [testbed-manager] 2026-01-10 14:20:31.466186 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:20:31.466192 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:20:31.466198 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:20:31.466204 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:20:31.466210 | orchestrator | 2026-01-10 14:20:31.466216 | orchestrator | TASK [osism.commons.network : Create required directories] ********************* 2026-01-10 14:20:31.466222 | orchestrator | Saturday 10 January 2026 14:20:12 +0000 (0:00:01.939) 0:00:06.022 ****** 
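The network role above includes the netplan tasks for the Debian family, installs the required packages, and removes ifupdown; the tasks that follow render a netplan configuration into /etc/netplan and later remove cloud-init's 50-cloud-init.yaml. A minimal sketch of a netplan file of that general shape (interface name and address are hypothetical, not taken from this testbed):

```yaml
# Illustrative netplan file of the shape such a role renders.
# Interface name and address are hypothetical.
network:
  version: 2
  ethernets:
    ens3:
      dhcp4: false
      addresses:
        - 192.168.16.10/20
```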
2026-01-10 14:20:31.466230 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan) 2026-01-10 14:20:31.466237 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan) 2026-01-10 14:20:31.466243 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan) 2026-01-10 14:20:31.466249 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan) 2026-01-10 14:20:31.466255 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan) 2026-01-10 14:20:31.466261 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan) 2026-01-10 14:20:31.466267 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan) 2026-01-10 14:20:31.466273 | orchestrator | 2026-01-10 14:20:31.466280 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] ********** 2026-01-10 14:20:31.466303 | orchestrator | Saturday 10 January 2026 14:20:13 +0000 (0:00:01.036) 0:00:07.059 ****** 2026-01-10 14:20:31.466309 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-01-10 14:20:31.466316 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-01-10 14:20:31.466322 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-01-10 14:20:31.466328 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-01-10 14:20:31.466334 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-10 14:20:31.466341 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-10 14:20:31.466348 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-01-10 14:20:31.466354 | orchestrator | 2026-01-10 14:20:31.466364 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] ********************** 2026-01-10 14:20:31.466371 | orchestrator | Saturday 10 January 2026 14:20:16 +0000 (0:00:03.449) 0:00:10.508 ****** 2026-01-10 14:20:31.466378 | orchestrator | changed: [testbed-manager] 2026-01-10 14:20:31.466385 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:20:31.466391 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:20:31.466397 | orchestrator | changed: 
[testbed-node-2] 2026-01-10 14:20:31.466404 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:20:31.466410 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:20:31.466417 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:20:31.466423 | orchestrator | 2026-01-10 14:20:31.466429 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] *********** 2026-01-10 14:20:31.466436 | orchestrator | Saturday 10 January 2026 14:20:18 +0000 (0:00:01.708) 0:00:12.217 ****** 2026-01-10 14:20:31.466443 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-10 14:20:31.466449 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-10 14:20:31.466464 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-01-10 14:20:31.466470 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-01-10 14:20:31.466476 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-01-10 14:20:31.466483 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-01-10 14:20:31.466489 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-01-10 14:20:31.466495 | orchestrator | 2026-01-10 14:20:31.466502 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2026-01-10 14:20:31.466509 | orchestrator | Saturday 10 January 2026 14:20:19 +0000 (0:00:01.688) 0:00:13.905 ****** 2026-01-10 14:20:31.466515 | orchestrator | ok: [testbed-manager] 2026-01-10 14:20:31.466522 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:20:31.466528 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:20:31.466534 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:20:31.466541 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:20:31.466547 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:20:31.466553 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:20:31.466559 | orchestrator | 2026-01-10 14:20:31.466566 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2026-01-10 14:20:31.466587 | 
orchestrator | Saturday 10 January 2026 14:20:21 +0000 (0:00:01.205) 0:00:15.110 ****** 2026-01-10 14:20:31.466594 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:20:31.466600 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:20:31.466605 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:20:31.466611 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:20:31.466617 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:20:31.466623 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:20:31.466629 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:20:31.466635 | orchestrator | 2026-01-10 14:20:31.466641 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] ************* 2026-01-10 14:20:31.466647 | orchestrator | Saturday 10 January 2026 14:20:21 +0000 (0:00:00.669) 0:00:15.780 ****** 2026-01-10 14:20:31.466653 | orchestrator | ok: [testbed-manager] 2026-01-10 14:20:31.466659 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:20:31.466665 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:20:31.466671 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:20:31.466676 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:20:31.466682 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:20:31.466688 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:20:31.466695 | orchestrator | 2026-01-10 14:20:31.466701 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2026-01-10 14:20:31.466707 | orchestrator | Saturday 10 January 2026 14:20:24 +0000 (0:00:02.569) 0:00:18.349 ****** 2026-01-10 14:20:31.466713 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:20:31.466719 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:20:31.466725 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:20:31.466731 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:20:31.466737 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:20:31.466742 | 
orchestrator | skipping: [testbed-node-5] 2026-01-10 14:20:31.466749 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'}) 2026-01-10 14:20:31.466757 | orchestrator | 2026-01-10 14:20:31.466763 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] ************** 2026-01-10 14:20:31.466769 | orchestrator | Saturday 10 January 2026 14:20:25 +0000 (0:00:00.980) 0:00:19.330 ****** 2026-01-10 14:20:31.466775 | orchestrator | ok: [testbed-manager] 2026-01-10 14:20:31.466781 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:20:31.466787 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:20:31.466792 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:20:31.466798 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:20:31.466804 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:20:31.466809 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:20:31.466816 | orchestrator | 2026-01-10 14:20:31.466822 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2026-01-10 14:20:31.466833 | orchestrator | Saturday 10 January 2026 14:20:27 +0000 (0:00:01.730) 0:00:21.061 ****** 2026-01-10 14:20:31.466840 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-10 14:20:31.466849 | orchestrator | 2026-01-10 14:20:31.466855 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2026-01-10 14:20:31.466861 | orchestrator | Saturday 10 January 2026 14:20:28 +0000 (0:00:01.276) 0:00:22.337 ****** 2026-01-10 14:20:31.466866 | orchestrator | ok: [testbed-manager] 2026-01-10 14:20:31.466872 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:20:31.466878 | orchestrator 
| ok: [testbed-node-0] 2026-01-10 14:20:31.466884 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:20:31.466889 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:20:31.466895 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:20:31.466901 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:20:31.466907 | orchestrator | 2026-01-10 14:20:31.466913 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2026-01-10 14:20:31.466920 | orchestrator | Saturday 10 January 2026 14:20:29 +0000 (0:00:00.978) 0:00:23.315 ****** 2026-01-10 14:20:31.466926 | orchestrator | ok: [testbed-manager] 2026-01-10 14:20:31.466931 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:20:31.466937 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:20:31.466943 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:20:31.466949 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:20:31.466959 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:20:31.466965 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:20:31.466971 | orchestrator | 2026-01-10 14:20:31.466977 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2026-01-10 14:20:31.466983 | orchestrator | Saturday 10 January 2026 14:20:30 +0000 (0:00:00.860) 0:00:24.175 ****** 2026-01-10 14:20:31.466989 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2026-01-10 14:20:31.466996 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2026-01-10 14:20:31.467002 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2026-01-10 14:20:31.467024 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2026-01-10 14:20:31.467030 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2026-01-10 14:20:31.467035 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2026-01-10 14:20:31.467041 | 
orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2026-01-10 14:20:31.467047 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2026-01-10 14:20:31.467053 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2026-01-10 14:20:31.467059 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2026-01-10 14:20:31.467064 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2026-01-10 14:20:31.467071 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2026-01-10 14:20:31.467076 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2026-01-10 14:20:31.467082 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2026-01-10 14:20:31.467088 | orchestrator | 2026-01-10 14:20:31.467099 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2026-01-10 14:20:49.101270 | orchestrator | Saturday 10 January 2026 14:20:31 +0000 (0:00:01.246) 0:00:25.422 ****** 2026-01-10 14:20:49.101408 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:20:49.101425 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:20:49.101436 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:20:49.101447 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:20:49.101458 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:20:49.101590 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:20:49.101606 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:20:49.101617 | orchestrator | 2026-01-10 14:20:49.101629 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************ 2026-01-10 14:20:49.101640 | orchestrator | Saturday 10 January 2026 14:20:32 +0000 (0:00:00.653) 0:00:26.075 ****** 2026-01-10 14:20:49.101652 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-manager, testbed-node-0, testbed-node-2, testbed-node-1, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-10 14:20:49.101666 | orchestrator | 2026-01-10 14:20:49.101677 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************ 2026-01-10 14:20:49.101688 | orchestrator | Saturday 10 January 2026 14:20:36 +0000 (0:00:04.693) 0:00:30.769 ****** 2026-01-10 14:20:49.101700 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2026-01-10 14:20:49.101715 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2026-01-10 14:20:49.101726 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2026-01-10 14:20:49.101738 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2026-01-10 14:20:49.101749 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 
42}}) 2026-01-10 14:20:49.101760 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2026-01-10 14:20:49.101785 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2026-01-10 14:20:49.101797 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2026-01-10 14:20:49.101808 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2026-01-10 14:20:49.101819 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2026-01-10 14:20:49.101837 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2026-01-10 14:20:49.101874 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', 
'192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2026-01-10 14:20:49.101887 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2026-01-10 14:20:49.101899 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2026-01-10 14:20:49.101910 | orchestrator | 2026-01-10 14:20:49.101921 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2026-01-10 14:20:49.101932 | orchestrator | Saturday 10 January 2026 14:20:42 +0000 (0:00:06.078) 0:00:36.847 ****** 2026-01-10 14:20:49.101943 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2026-01-10 14:20:49.101955 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2026-01-10 14:20:49.101966 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2026-01-10 14:20:49.101977 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2026-01-10 14:20:49.102062 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2026-01-10 14:20:49.102078 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2026-01-10 14:20:49.102089 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2026-01-10 14:20:49.102106 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2026-01-10 14:20:49.102118 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2026-01-10 14:20:49.102129 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 
'mtu': 1350, 'vni': 42}}) 2026-01-10 14:20:49.102148 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2026-01-10 14:20:49.102159 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2026-01-10 14:20:49.102182 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2026-01-10 14:20:56.550859 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2026-01-10 14:20:56.550964 | orchestrator | 2026-01-10 14:20:56.551029 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2026-01-10 14:20:56.551047 | orchestrator | Saturday 10 January 2026 14:20:49 +0000 (0:00:06.209) 0:00:43.057 ****** 2026-01-10 14:20:56.551060 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-10 14:20:56.551072 | orchestrator | 2026-01-10 14:20:56.551083 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 
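The per-host items in the "Create systemd networkd netdev files" and "Create systemd networkd network files" tasks above follow a full-mesh pattern: each node's `dests` list contains every other endpoint's local IP, sorted as strings (which is why '192.168.16.5' appears last). A minimal sketch reproducing that pattern (hypothetical helper, not part of the osism.commons.network role):

```python
# Sketch: derive full-mesh VXLAN peer lists like the per-host `dests`
# values in the task items above. Hypothetical helper, not OSISM code.

def vxlan_mesh(endpoints):
    """Map each local IP to the string-sorted list of all other endpoints."""
    return {
        local: sorted(ip for ip in endpoints if ip != local)
        for local in endpoints
    }

# Endpoint IPs as they appear in the log (testbed-manager plus six nodes).
endpoints = [
    "192.168.16.5",   # testbed-manager
    "192.168.16.10",  # testbed-node-0
    "192.168.16.11",  # testbed-node-1
    "192.168.16.12",  # testbed-node-2
    "192.168.16.13",  # testbed-node-3
    "192.168.16.14",  # testbed-node-4
    "192.168.16.15",  # testbed-node-5
]

mesh = vxlan_mesh(endpoints)
```

String sorting matches the ordering seen in the log output, where '192.168.16.5' sorts after '192.168.16.15'.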
2026-01-10 14:20:56.551095 | orchestrator | Saturday 10 January 2026 14:20:50 +0000 (0:00:01.409) 0:00:44.466 ******
2026-01-10 14:20:56.551106 | orchestrator | ok: [testbed-manager]
2026-01-10 14:20:56.551117 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:20:56.551129 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:20:56.551139 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:20:56.551150 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:20:56.551160 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:20:56.551171 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:20:56.551182 | orchestrator |
2026-01-10 14:20:56.551192 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2026-01-10 14:20:56.551203 | orchestrator | Saturday 10 January 2026 14:20:51 +0000 (0:00:01.333) 0:00:45.800 ******
2026-01-10 14:20:56.551214 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)
2026-01-10 14:20:56.551226 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)
2026-01-10 14:20:56.551236 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-01-10 14:20:56.551247 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-01-10 14:20:56.551258 | orchestrator | skipping: [testbed-manager]
2026-01-10 14:20:56.551270 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)
2026-01-10 14:20:56.551281 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)
2026-01-10 14:20:56.551292 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-01-10 14:20:56.551302 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-01-10 14:20:56.551313 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)
2026-01-10 14:20:56.551324 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)
2026-01-10 14:20:56.551334 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-01-10 14:20:56.551372 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-01-10 14:20:56.551384 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:20:56.551396 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)
2026-01-10 14:20:56.551409 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)
2026-01-10 14:20:56.551437 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-01-10 14:20:56.551450 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:20:56.551462 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-01-10 14:20:56.551474 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)
2026-01-10 14:20:56.551486 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)
2026-01-10 14:20:56.551498 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-01-10 14:20:56.551510 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:20:56.551522 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-01-10 14:20:56.551535 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)
2026-01-10 14:20:56.551547 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)
2026-01-10 14:20:56.551559 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-01-10 14:20:56.551571 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-01-10 14:20:56.551582 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:20:56.551595 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:20:56.551608 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)
2026-01-10 14:20:56.551620 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)
2026-01-10 14:20:56.551632 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-01-10 14:20:56.551645 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-01-10 14:20:56.551657 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:20:56.551669 | orchestrator |
2026-01-10 14:20:56.551681 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] **************
2026-01-10 14:20:56.551711 | orchestrator | Saturday 10 January 2026 14:20:54 +0000 (0:00:02.459) 0:00:48.259 ******
2026-01-10 14:20:56.551725 | orchestrator | skipping: [testbed-manager]
2026-01-10 14:20:56.551737 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:20:56.551748 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:20:56.551759 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:20:56.551769 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:20:56.551780 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:20:56.551790 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:20:56.551801 | orchestrator |
2026-01-10 14:20:56.551812 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ********
2026-01-10 14:20:56.551823 | orchestrator | Saturday 10 January 2026 14:20:54 +0000 (0:00:00.694) 0:00:48.954 ******
2026-01-10 14:20:56.551834 | orchestrator | skipping: [testbed-manager]
2026-01-10 14:20:56.551845 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:20:56.551855 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:20:56.551866 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:20:56.551876 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:20:56.551887 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:20:56.551898 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:20:56.551908 | orchestrator |
2026-01-10 14:20:56.551919 | orchestrator | PLAY RECAP *********************************************************************
2026-01-10 14:20:56.551931 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-01-10 14:20:56.551952 | orchestrator | testbed-node-0 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-01-10 14:20:56.551963 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-01-10 14:20:56.551974 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-01-10 14:20:56.552013 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-01-10 14:20:56.552025 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-01-10 14:20:56.552036 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-01-10 14:20:56.552046 | orchestrator |
2026-01-10 14:20:56.552057 | orchestrator |
2026-01-10 14:20:56.552068 | orchestrator | TASKS RECAP ********************************************************************
2026-01-10 14:20:56.552078 | orchestrator | Saturday 10 January 2026 14:20:55 +0000 (0:00:00.923) 0:00:49.877 ******
2026-01-10 14:20:56.552089 | orchestrator | ===============================================================================
2026-01-10 14:20:56.552100 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 6.21s
2026-01-10 14:20:56.552111 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 6.08s
2026-01-10 14:20:56.552122 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.69s
2026-01-10 14:20:56.552132 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.45s
2026-01-10 14:20:56.552149 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.57s
2026-01-10 14:20:56.552160 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 2.46s
2026-01-10 14:20:56.552170 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.94s
2026-01-10 14:20:56.552181 | orchestrator | osism.commons.network : Install required packages ----------------------- 1.90s
2026-01-10 14:20:56.552191 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.73s
2026-01-10 14:20:56.552202 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.71s
2026-01-10 14:20:56.552213 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.69s
2026-01-10 14:20:56.552223 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.41s
2026-01-10 14:20:56.552234 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.33s
2026-01-10 14:20:56.552244 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.28s
2026-01-10 14:20:56.552255 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.25s
2026-01-10 14:20:56.552266 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.21s
2026-01-10 14:20:56.552276 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.20s
2026-01-10 14:20:56.552287 | orchestrator | osism.commons.network : Create required directories --------------------- 1.04s
2026-01-10 14:20:56.552297 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 0.98s
2026-01-10 14:20:56.552308 | orchestrator | osism.commons.network : List existing configuration files --------------- 0.98s
2026-01-10 14:20:57.010310 | orchestrator | + osism apply wireguard
2026-01-10 14:21:09.085458 | orchestrator | 2026-01-10 14:21:09 | INFO  | Task 3db122e9-da05-4428-8499-cf21e781ff1a (wireguard) was prepared for execution.
2026-01-10 14:21:09.085566 | orchestrator | 2026-01-10 14:21:09 | INFO  | It takes a moment until task 3db122e9-da05-4428-8499-cf21e781ff1a (wireguard) has been started and output is visible here.
2026-01-10 14:21:29.918366 | orchestrator |
2026-01-10 14:21:29.918451 | orchestrator | PLAY [Apply role wireguard] ****************************************************
2026-01-10 14:21:29.918463 | orchestrator |
2026-01-10 14:21:29.918471 | orchestrator | TASK [osism.services.wireguard : Install iptables package] *********************
2026-01-10 14:21:29.918480 | orchestrator | Saturday 10 January 2026 14:21:13 +0000 (0:00:00.229) 0:00:00.229 ******
2026-01-10 14:21:29.918489 | orchestrator | ok: [testbed-manager]
2026-01-10 14:21:29.918498 | orchestrator |
2026-01-10 14:21:29.918509 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ********************
2026-01-10 14:21:29.918517 | orchestrator | Saturday 10 January 2026 14:21:15 +0000 (0:00:01.611) 0:00:01.840 ******
2026-01-10 14:21:29.918524 | orchestrator | changed: [testbed-manager]
2026-01-10 14:21:29.918533 | orchestrator |
2026-01-10 14:21:29.918541 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] *******
2026-01-10 14:21:29.918549 | orchestrator | Saturday 10 January 2026 14:21:22 +0000 (0:00:06.960) 0:00:08.801 ******
2026-01-10 14:21:29.918556 | orchestrator | changed: [testbed-manager]
2026-01-10 14:21:29.918564 | orchestrator |
2026-01-10 14:21:29.918571 | orchestrator | TASK [osism.services.wireguard : Create preshared key] *************************
2026-01-10 14:21:29.918579 | orchestrator | Saturday 10 January 2026 14:21:22 +0000 (0:00:00.565) 0:00:09.367 ******
2026-01-10 14:21:29.918587 | orchestrator | changed: [testbed-manager]
2026-01-10 14:21:29.918594 | orchestrator |
2026-01-10 14:21:29.918602 | orchestrator | TASK [osism.services.wireguard : Get preshared key] ****************************
2026-01-10 14:21:29.918610 | orchestrator | Saturday 10 January 2026 14:21:23 +0000 (0:00:00.460) 0:00:09.827 ******
2026-01-10 14:21:29.918617 | orchestrator | ok: [testbed-manager]
2026-01-10 14:21:29.918625 | orchestrator |
2026-01-10 14:21:29.918633 | orchestrator | TASK [osism.services.wireguard : Get public key - server] **********************
2026-01-10 14:21:29.918640 | orchestrator | Saturday 10 January 2026 14:21:23 +0000 (0:00:00.689) 0:00:10.516 ******
2026-01-10 14:21:29.918648 | orchestrator | ok: [testbed-manager]
2026-01-10 14:21:29.918656 | orchestrator |
2026-01-10 14:21:29.918663 | orchestrator | TASK [osism.services.wireguard : Get private key - server] *********************
2026-01-10 14:21:29.918671 | orchestrator | Saturday 10 January 2026 14:21:24 +0000 (0:00:00.454) 0:00:10.971 ******
2026-01-10 14:21:29.918678 | orchestrator | ok: [testbed-manager]
2026-01-10 14:21:29.918686 | orchestrator |
2026-01-10 14:21:29.918694 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] *************
2026-01-10 14:21:29.918701 | orchestrator | Saturday 10 January 2026 14:21:24 +0000 (0:00:00.448) 0:00:11.420 ******
2026-01-10 14:21:29.918709 | orchestrator | changed: [testbed-manager]
2026-01-10 14:21:29.918717 | orchestrator |
2026-01-10 14:21:29.918746 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] **************
2026-01-10 14:21:29.918754 | orchestrator | Saturday 10 January 2026 14:21:25 +0000 (0:00:01.133) 0:00:12.553 ******
2026-01-10 14:21:29.918762 | orchestrator | changed: [testbed-manager] => (item=None)
2026-01-10 14:21:29.918769 | orchestrator | changed: [testbed-manager]
2026-01-10 14:21:29.918777 | orchestrator |
2026-01-10 14:21:29.918785 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] **********
2026-01-10 14:21:29.918793 | orchestrator | Saturday 10 January 2026 14:21:26 +0000 (0:00:00.987) 0:00:13.541 ******
2026-01-10 14:21:29.918800 | orchestrator | changed: [testbed-manager]
2026-01-10 14:21:29.918808 | orchestrator |
2026-01-10 14:21:29.918816 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] ***************
2026-01-10 14:21:29.918823 | orchestrator | Saturday 10 January 2026 14:21:28 +0000 (0:00:01.760) 0:00:15.301 ******
2026-01-10 14:21:29.918831 | orchestrator | changed: [testbed-manager]
2026-01-10 14:21:29.918839 | orchestrator |
2026-01-10 14:21:29.918846 | orchestrator | PLAY RECAP *********************************************************************
2026-01-10 14:21:29.918855 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:21:29.918888 | orchestrator |
2026-01-10 14:21:29.918896 | orchestrator |
2026-01-10 14:21:29.918904 | orchestrator | TASKS RECAP ********************************************************************
2026-01-10 14:21:29.918912 | orchestrator | Saturday 10 January 2026 14:21:29 +0000 (0:00:00.950) 0:00:16.252 ******
2026-01-10 14:21:29.918919 | orchestrator | ===============================================================================
2026-01-10 14:21:29.918927 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 6.96s
2026-01-10 14:21:29.918935 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.76s
2026-01-10 14:21:29.918943 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.61s
2026-01-10 14:21:29.918980 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.13s
2026-01-10 14:21:29.918989 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.99s
2026-01-10 14:21:29.918997 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.95s
2026-01-10 14:21:29.919006 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.69s
2026-01-10 14:21:29.919014 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.57s
2026-01-10 14:21:29.919022 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.46s
2026-01-10 14:21:29.919030 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.46s
2026-01-10 14:21:29.919038 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.45s
2026-01-10 14:21:30.252808 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh
2026-01-10 14:21:30.294619 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current
2026-01-10 14:21:30.294701 | orchestrator | Dload Upload Total Spent Left Speed
2026-01-10 14:21:30.369344 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 15 100 15 0 0 200 0 --:--:-- --:--:-- --:--:-- 202
2026-01-10 14:21:30.384680 | orchestrator | + osism apply --environment custom workarounds
2026-01-10 14:21:32.340755 | orchestrator | 2026-01-10 14:21:32 | INFO  | Trying to run play workarounds in environment custom
2026-01-10 14:21:42.440795 | orchestrator | 2026-01-10 14:21:42 | INFO  | Task b2c9f4d5-6e41-47d1-adba-7471b1739525 (workarounds) was prepared for execution.
2026-01-10 14:21:42.440885 | orchestrator | 2026-01-10 14:21:42 | INFO  | It takes a moment until task b2c9f4d5-6e41-47d1-adba-7471b1739525 (workarounds) has been started and output is visible here.
2026-01-10 14:22:08.676414 | orchestrator |
2026-01-10 14:22:08.676525 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-10 14:22:08.676541 | orchestrator |
2026-01-10 14:22:08.676551 | orchestrator | TASK [Group hosts based on virtualization_role] ********************************
2026-01-10 14:22:08.676562 | orchestrator | Saturday 10 January 2026 14:21:46 +0000 (0:00:00.133) 0:00:00.133 ******
2026-01-10 14:22:08.676573 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest)
2026-01-10 14:22:08.676583 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest)
2026-01-10 14:22:08.676593 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest)
2026-01-10 14:22:08.676603 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest)
2026-01-10 14:22:08.676612 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest)
2026-01-10 14:22:08.676622 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest)
2026-01-10 14:22:08.676631 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest)
2026-01-10 14:22:08.676641 | orchestrator |
2026-01-10 14:22:08.676650 | orchestrator | PLAY [Apply netplan configuration on the manager node] *************************
2026-01-10 14:22:08.676659 | orchestrator |
2026-01-10 14:22:08.676669 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-01-10 14:22:08.676701 | orchestrator | Saturday 10 January 2026 14:21:47 +0000 (0:00:00.822) 0:00:00.956 ******
2026-01-10 14:22:08.676711 | orchestrator | ok: [testbed-manager]
2026-01-10 14:22:08.676722 | orchestrator |
2026-01-10 14:22:08.676732 | orchestrator | PLAY [Apply netplan configuration on all other nodes] **************************
2026-01-10 14:22:08.676741 | orchestrator |
2026-01-10 14:22:08.676751 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-01-10 14:22:08.676761 | orchestrator | Saturday 10 January 2026 14:21:49 +0000 (0:00:02.456) 0:00:03.412 ******
2026-01-10 14:22:08.676770 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:22:08.676780 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:22:08.676789 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:22:08.676798 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:22:08.676808 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:22:08.676817 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:22:08.676826 | orchestrator |
2026-01-10 14:22:08.676836 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] *************************
2026-01-10 14:22:08.676845 | orchestrator |
2026-01-10 14:22:08.676855 | orchestrator | TASK [Copy custom CA certificates] *********************************************
2026-01-10 14:22:08.676865 | orchestrator | Saturday 10 January 2026 14:21:51 +0000 (0:00:01.918) 0:00:05.331 ******
2026-01-10 14:22:08.676875 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-01-10 14:22:08.676885 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-01-10 14:22:08.676895 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-01-10 14:22:08.676957 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-01-10 14:22:08.676972 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-01-10 14:22:08.676984 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-01-10 14:22:08.676995 | orchestrator |
2026-01-10 14:22:08.677006 | orchestrator | TASK [Run update-ca-certificates] **********************************************
2026-01-10 14:22:08.677018 | orchestrator | Saturday 10 January 2026 14:21:53 +0000 (0:00:01.544) 0:00:06.875 ******
2026-01-10 14:22:08.677029 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:22:08.677040 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:22:08.677052 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:22:08.677063 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:22:08.677074 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:22:08.677085 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:22:08.677095 | orchestrator |
2026-01-10 14:22:08.677107 | orchestrator | TASK [Run update-ca-trust] *****************************************************
2026-01-10 14:22:08.677118 | orchestrator | Saturday 10 January 2026 14:21:57 +0000 (0:00:03.988) 0:00:10.864 ******
2026-01-10 14:22:08.677129 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:22:08.677140 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:22:08.677151 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:22:08.677160 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:22:08.677170 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:22:08.677179 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:22:08.677189 | orchestrator |
2026-01-10 14:22:08.677198 | orchestrator | PLAY [Add a workaround service] ************************************************
2026-01-10 14:22:08.677208 | orchestrator |
2026-01-10 14:22:08.677217 | orchestrator | TASK [Copy workarounds.sh scripts] *********************************************
2026-01-10 14:22:08.677227 | orchestrator | Saturday 10 January 2026 14:21:58 +0000 (0:00:00.787) 0:00:11.651 ******
2026-01-10 14:22:08.677237 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:22:08.677246 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:22:08.677256 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:22:08.677273 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:22:08.677283 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:22:08.677292 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:22:08.677302 | orchestrator | changed: [testbed-manager]
2026-01-10 14:22:08.677311 | orchestrator |
2026-01-10 14:22:08.677321 | orchestrator | TASK [Copy workarounds systemd unit file] **************************************
2026-01-10 14:22:08.677330 | orchestrator | Saturday 10 January 2026 14:21:59 +0000 (0:00:01.590) 0:00:13.242 ******
2026-01-10 14:22:08.677340 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:22:08.677349 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:22:08.677359 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:22:08.677368 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:22:08.677378 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:22:08.677387 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:22:08.677412 | orchestrator | changed: [testbed-manager]
2026-01-10 14:22:08.677422 | orchestrator |
2026-01-10 14:22:08.677432 | orchestrator | TASK [Reload systemd daemon] ***************************************************
2026-01-10 14:22:08.677441 | orchestrator | Saturday 10 January 2026 14:22:01 +0000 (0:00:01.568) 0:00:14.827 ******
2026-01-10 14:22:08.677451 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:22:08.677461 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:22:08.677470 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:22:08.677480 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:22:08.677489 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:22:08.677498 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:22:08.677508 | orchestrator | ok: [testbed-manager]
2026-01-10 14:22:08.677517 | orchestrator |
2026-01-10 14:22:08.677527 | orchestrator | TASK [Enable workarounds.service (Debian)] *************************************
2026-01-10 14:22:08.677537 | orchestrator | Saturday 10 January 2026 14:22:02 +0000 (0:00:01.568) 0:00:16.396 ******
2026-01-10 14:22:08.677546 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:22:08.677556 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:22:08.677565 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:22:08.677574 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:22:08.677584 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:22:08.677593 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:22:08.677603 | orchestrator | changed: [testbed-manager]
2026-01-10 14:22:08.677612 | orchestrator |
2026-01-10 14:22:08.677622 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] ***************************
2026-01-10 14:22:08.677631 | orchestrator | Saturday 10 January 2026 14:22:04 +0000 (0:00:01.887) 0:00:18.283 ******
2026-01-10 14:22:08.677641 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:22:08.677650 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:22:08.677660 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:22:08.677669 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:22:08.677679 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:22:08.677688 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:22:08.677697 | orchestrator | skipping: [testbed-manager]
2026-01-10 14:22:08.677707 | orchestrator |
2026-01-10 14:22:08.677717 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ******************
2026-01-10 14:22:08.677726 | orchestrator |
2026-01-10 14:22:08.677736 | orchestrator | TASK [Install python3-docker] **************************************************
2026-01-10 14:22:08.677745 | orchestrator | Saturday 10 January 2026 14:22:05 +0000 (0:00:00.657) 0:00:18.941 ******
2026-01-10 14:22:08.677755 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:22:08.677764 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:22:08.677774 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:22:08.677783 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:22:08.677792 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:22:08.677802 | orchestrator | ok: [testbed-manager]
2026-01-10 14:22:08.677811 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:22:08.677821 | orchestrator |
2026-01-10 14:22:08.677830 | orchestrator | PLAY RECAP *********************************************************************
2026-01-10 14:22:08.677841 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-10 14:22:08.677863 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-10 14:22:08.677873 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-10 14:22:08.677883 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-10 14:22:08.677892 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-10 14:22:08.677902 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-10 14:22:08.677911 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-10 14:22:08.677942 | orchestrator |
2026-01-10 14:22:08.677952 | orchestrator |
2026-01-10 14:22:08.677961 | orchestrator | TASKS RECAP ********************************************************************
2026-01-10 14:22:08.677971 | orchestrator | Saturday 10 January 2026 14:22:08 +0000 (0:00:03.219) 0:00:22.161 ******
2026-01-10 14:22:08.677981 | orchestrator | ===============================================================================
2026-01-10 14:22:08.677990 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.99s
2026-01-10 14:22:08.678000 | orchestrator | Install python3-docker -------------------------------------------------- 3.22s
2026-01-10 14:22:08.678009 | orchestrator | Apply netplan configuration --------------------------------------------- 2.46s
2026-01-10 14:22:08.678070 | orchestrator | Apply netplan configuration --------------------------------------------- 1.92s
2026-01-10 14:22:08.678080 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.89s
2026-01-10 14:22:08.678090 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.59s
2026-01-10 14:22:08.678099 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.59s
2026-01-10 14:22:08.678109 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.57s
2026-01-10 14:22:08.678118 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.54s
2026-01-10 14:22:08.678128 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.82s
2026-01-10 14:22:08.678137 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.79s
2026-01-10 14:22:08.678154 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.66s
2026-01-10 14:22:09.360975 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes
2026-01-10 14:22:21.483651 | orchestrator | 2026-01-10 14:22:21 | INFO  | Task 0c67f66d-fe6b-43f1-be1f-68ff51474d91 (reboot) was prepared for execution.
2026-01-10 14:22:21.483791 | orchestrator | 2026-01-10 14:22:21 | INFO  | It takes a moment until task 0c67f66d-fe6b-43f1-be1f-68ff51474d91 (reboot) has been started and output is visible here.
2026-01-10 14:22:31.808853 | orchestrator |
2026-01-10 14:22:31.809010 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-01-10 14:22:31.809021 | orchestrator |
2026-01-10 14:22:31.809029 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-01-10 14:22:31.809037 | orchestrator | Saturday 10 January 2026 14:22:25 +0000 (0:00:00.205) 0:00:00.205 ******
2026-01-10 14:22:31.809046 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:22:31.809055 | orchestrator |
2026-01-10 14:22:31.809063 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-01-10 14:22:31.809071 | orchestrator | Saturday 10 January 2026 14:22:25 +0000 (0:00:00.108) 0:00:00.314 ******
2026-01-10 14:22:31.809101 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:22:31.809108 | orchestrator |
2026-01-10 14:22:31.809115 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-01-10 14:22:31.809122 | orchestrator | Saturday 10 January 2026 14:22:26 +0000 (0:00:00.900) 0:00:01.214 ******
2026-01-10 14:22:31.809129 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:22:31.809136 | orchestrator |
2026-01-10 14:22:31.809142 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-01-10 14:22:31.809149 | orchestrator |
2026-01-10 14:22:31.809156 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-01-10 14:22:31.809163 | orchestrator | Saturday 10 January 2026 14:22:26 +0000 (0:00:00.157) 0:00:01.372 ******
2026-01-10 14:22:31.809170 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:22:31.809177 | orchestrator |
2026-01-10 14:22:31.809183 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-01-10 14:22:31.809190 | orchestrator | Saturday 10 January 2026 14:22:26 +0000 (0:00:00.100) 0:00:01.473 ******
2026-01-10 14:22:31.809197 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:22:31.809203 | orchestrator |
2026-01-10 14:22:31.809210 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-01-10 14:22:31.809217 | orchestrator | Saturday 10 January 2026 14:22:27 +0000 (0:00:00.686) 0:00:02.160 ******
2026-01-10 14:22:31.809224 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:22:31.809230 | orchestrator |
2026-01-10 14:22:31.809237 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-01-10 14:22:31.809244 | orchestrator |
2026-01-10 14:22:31.809250 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-01-10 14:22:31.809257 | orchestrator | Saturday 10 January 2026 14:22:27 +0000 (0:00:00.117) 0:00:02.277 ******
2026-01-10 14:22:31.809264 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:22:31.809270 | orchestrator |
2026-01-10 14:22:31.809290 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-01-10 14:22:31.809297 | orchestrator | Saturday 10 January 2026 14:22:27 +0000 (0:00:00.214) 0:00:02.492 ******
2026-01-10 14:22:31.809304 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:22:31.809311 | orchestrator |
2026-01-10 14:22:31.809318 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-01-10 14:22:31.809325 | orchestrator | Saturday 10 January 2026 14:22:28 +0000 (0:00:00.698) 0:00:03.190 ******
2026-01-10 14:22:31.809332 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:22:31.809339 | orchestrator |
2026-01-10 14:22:31.809346 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-01-10 14:22:31.809353 | orchestrator |
2026-01-10 14:22:31.809360 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-01-10 14:22:31.809367 | orchestrator | Saturday 10 January 2026 14:22:28 +0000 (0:00:00.120) 0:00:03.310 ******
2026-01-10 14:22:31.809373 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:22:31.809380 | orchestrator |
2026-01-10 14:22:31.809387 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-01-10 14:22:31.809395 | orchestrator | Saturday 10 January 2026 14:22:28 +0000 (0:00:00.108) 0:00:03.419 ******
2026-01-10 14:22:31.809402 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:22:31.809409 | orchestrator |
2026-01-10 14:22:31.809416 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-01-10 14:22:31.809423 | orchestrator | Saturday 10 January 2026 14:22:29 +0000 (0:00:00.699) 0:00:04.118 ******
2026-01-10 14:22:31.809431 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:22:31.809438 | orchestrator |
2026-01-10 14:22:31.809446 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-01-10 14:22:31.809453 | orchestrator |
2026-01-10 14:22:31.809460 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-01-10 14:22:31.809468 | orchestrator | Saturday 10 January 2026 14:22:29 +0000 (0:00:00.121) 0:00:04.240 ******
2026-01-10 14:22:31.809482 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:22:31.809490 | orchestrator |
2026-01-10 14:22:31.809497 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-01-10 14:22:31.809504 | orchestrator | Saturday 10 January 2026 14:22:29 +0000 (0:00:00.121) 0:00:04.362 ******
2026-01-10 14:22:31.809511 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:22:31.809518 | orchestrator |
2026-01-10 14:22:31.809526 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-01-10 14:22:31.809533 | orchestrator | Saturday 10 January 2026 14:22:30 +0000 (0:00:00.685) 0:00:05.047 ******
2026-01-10 14:22:31.809540 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:22:31.809547 | orchestrator |
2026-01-10 14:22:31.809554 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-01-10 14:22:31.809561 | orchestrator |
2026-01-10 14:22:31.809568 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-01-10 14:22:31.809575 | orchestrator | Saturday 10 January 2026 14:22:30 +0000 (0:00:00.103) 0:00:05.151 ******
2026-01-10 14:22:31.809582 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:22:31.809589 | orchestrator |
2026-01-10 14:22:31.809596 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-01-10 14:22:31.809603 | orchestrator | Saturday 10 January 2026 14:22:30 +0000 (0:00:00.113) 0:00:05.264 ******
2026-01-10 14:22:31.809610 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:22:31.809616 | orchestrator |
2026-01-10 14:22:31.809624 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-01-10 14:22:31.809631 | orchestrator | Saturday 10 January 2026 14:22:31 +0000 (0:00:00.711) 0:00:05.975 ******
2026-01-10 14:22:31.809652 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:22:31.809659 | orchestrator |
2026-01-10 14:22:31.809666 | orchestrator | PLAY RECAP *********************************************************************
2026-01-10 14:22:31.809675 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-10 14:22:31.809683 | orchestrator |
testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-10 14:22:31.809690 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-10 14:22:31.809697 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-10 14:22:31.809704 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-10 14:22:31.809711 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-10 14:22:31.809718 | orchestrator | 2026-01-10 14:22:31.809725 | orchestrator | 2026-01-10 14:22:31.809732 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-10 14:22:31.809738 | orchestrator | Saturday 10 January 2026 14:22:31 +0000 (0:00:00.036) 0:00:06.012 ****** 2026-01-10 14:22:31.809745 | orchestrator | =============================================================================== 2026-01-10 14:22:31.809752 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.38s 2026-01-10 14:22:31.809759 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.77s 2026-01-10 14:22:31.809766 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.66s 2026-01-10 14:22:32.123864 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2026-01-10 14:22:44.159137 | orchestrator | 2026-01-10 14:22:44 | INFO  | Task 034978d5-3be4-4bfc-9b70-12862bfb792c (wait-for-connection) was prepared for execution. 2026-01-10 14:22:44.159226 | orchestrator | 2026-01-10 14:22:44 | INFO  | It takes a moment until task 034978d5-3be4-4bfc-9b70-12862bfb792c (wait-for-connection) has been started and output is visible here. 
2026-01-10 14:23:00.372531 | orchestrator |
2026-01-10 14:23:00.372643 | orchestrator | PLAY [Wait until remote systems are reachable] *********************************
2026-01-10 14:23:00.372659 | orchestrator |
2026-01-10 14:23:00.372671 | orchestrator | TASK [Wait until remote system is reachable] ***********************************
2026-01-10 14:23:00.372683 | orchestrator | Saturday 10 January 2026 14:22:48 +0000 (0:00:00.241) 0:00:00.241 ******
2026-01-10 14:23:00.372694 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:23:00.372706 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:23:00.372717 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:23:00.372728 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:23:00.372739 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:23:00.372750 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:23:00.372761 | orchestrator |
2026-01-10 14:23:00.372772 | orchestrator | PLAY RECAP *********************************************************************
2026-01-10 14:23:00.372784 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:23:00.372796 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:23:00.372807 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:23:00.372819 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:23:00.372830 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:23:00.372841 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:23:00.372852 | orchestrator |
2026-01-10 14:23:00.372863 | orchestrator |
2026-01-10 14:23:00.372967 | orchestrator | TASKS RECAP ********************************************************************
2026-01-10 14:23:00.372982 | orchestrator | Saturday 10 January 2026 14:23:00 +0000 (0:00:11.627) 0:00:11.868 ******
2026-01-10 14:23:00.372994 | orchestrator | ===============================================================================
2026-01-10 14:23:00.373004 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.63s
2026-01-10 14:23:00.722644 | orchestrator | + osism apply hddtemp
2026-01-10 14:23:12.797499 | orchestrator | 2026-01-10 14:23:12 | INFO  | Task 668cbf17-1e6e-424f-a322-3738ff5e0b2f (hddtemp) was prepared for execution.
2026-01-10 14:23:12.797626 | orchestrator | 2026-01-10 14:23:12 | INFO  | It takes a moment until task 668cbf17-1e6e-424f-a322-3738ff5e0b2f (hddtemp) has been started and output is visible here.
2026-01-10 14:23:42.514477 | orchestrator |
2026-01-10 14:23:42.514595 | orchestrator | PLAY [Apply role hddtemp] ******************************************************
2026-01-10 14:23:42.514616 | orchestrator |
2026-01-10 14:23:42.514628 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] *****
2026-01-10 14:23:42.514642 | orchestrator | Saturday 10 January 2026 14:23:17 +0000 (0:00:00.251) 0:00:00.251 ******
2026-01-10 14:23:42.514654 | orchestrator | ok: [testbed-manager]
2026-01-10 14:23:42.514668 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:23:42.514680 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:23:42.514693 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:23:42.514705 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:23:42.514718 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:23:42.514729 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:23:42.514740 | orchestrator |
2026-01-10 14:23:42.514753 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] ****
2026-01-10 14:23:42.514766 | orchestrator | Saturday 10 January 2026 14:23:17 +0000 (0:00:00.730) 0:00:00.982 ******
2026-01-10 14:23:42.514812 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-10 14:23:42.514829 | orchestrator |
2026-01-10 14:23:42.514911 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] *************************
2026-01-10 14:23:42.514928 | orchestrator | Saturday 10 January 2026 14:23:19 +0000 (0:00:01.165) 0:00:02.147 ******
2026-01-10 14:23:42.514940 | orchestrator | ok: [testbed-manager]
2026-01-10 14:23:42.514953 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:23:42.514965 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:23:42.514977 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:23:42.514990 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:23:42.515002 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:23:42.515015 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:23:42.515028 | orchestrator |
2026-01-10 14:23:42.515042 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] *****************
2026-01-10 14:23:42.515054 | orchestrator | Saturday 10 January 2026 14:23:21 +0000 (0:00:02.235) 0:00:04.382 ******
2026-01-10 14:23:42.515068 | orchestrator | changed: [testbed-manager]
2026-01-10 14:23:42.515082 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:23:42.515094 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:23:42.515106 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:23:42.515118 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:23:42.515129 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:23:42.515141 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:23:42.515153 | orchestrator |
2026-01-10 14:23:42.515164 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] *********
2026-01-10 14:23:42.515193 | orchestrator | Saturday 10 January 2026 14:23:22 +0000 (0:00:01.215) 0:00:05.598 ******
2026-01-10 14:23:42.515206 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:23:42.515217 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:23:42.515228 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:23:42.515243 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:23:42.515255 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:23:42.515267 | orchestrator | ok: [testbed-manager]
2026-01-10 14:23:42.515278 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:23:42.515291 | orchestrator |
2026-01-10 14:23:42.515303 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] *******************
2026-01-10 14:23:42.515315 | orchestrator | Saturday 10 January 2026 14:23:23 +0000 (0:00:01.165) 0:00:06.763 ******
2026-01-10 14:23:42.515327 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:23:42.515338 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:23:42.515350 | orchestrator | changed: [testbed-manager]
2026-01-10 14:23:42.515362 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:23:42.515374 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:23:42.515386 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:23:42.515396 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:23:42.515406 | orchestrator |
2026-01-10 14:23:42.515418 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] *****************************
2026-01-10 14:23:42.515430 | orchestrator | Saturday 10 January 2026 14:23:24 +0000 (0:00:00.839) 0:00:07.603 ******
2026-01-10 14:23:42.515440 | orchestrator | changed: [testbed-manager]
2026-01-10 14:23:42.515452 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:23:42.515464 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:23:42.515476 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:23:42.515488 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:23:42.515499 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:23:42.515510 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:23:42.515522 | orchestrator |
2026-01-10 14:23:42.515533 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] ****
2026-01-10 14:23:42.515544 | orchestrator | Saturday 10 January 2026 14:23:38 +0000 (0:00:14.447) 0:00:22.051 ******
2026-01-10 14:23:42.515558 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-10 14:23:42.515584 | orchestrator |
2026-01-10 14:23:42.515596 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] **********************
2026-01-10 14:23:42.515609 | orchestrator | Saturday 10 January 2026 14:23:40 +0000 (0:00:01.212) 0:00:23.263 ******
2026-01-10 14:23:42.515620 | orchestrator | changed: [testbed-manager]
2026-01-10 14:23:42.515632 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:23:42.515643 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:23:42.515654 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:23:42.515666 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:23:42.515678 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:23:42.515690 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:23:42.515702 | orchestrator |
2026-01-10 14:23:42.515713 | orchestrator | PLAY RECAP *********************************************************************
2026-01-10 14:23:42.515725 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:23:42.515762 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-10 14:23:42.515776 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-10 14:23:42.515788 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-10 14:23:42.515799 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-10 14:23:42.515810 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-10 14:23:42.515822 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-10 14:23:42.515833 | orchestrator |
2026-01-10 14:23:42.515869 | orchestrator |
2026-01-10 14:23:42.515881 | orchestrator | TASKS RECAP ********************************************************************
2026-01-10 14:23:42.515893 | orchestrator | Saturday 10 January 2026 14:23:42 +0000 (0:00:01.967) 0:00:25.230 ******
2026-01-10 14:23:42.515904 | orchestrator | ===============================================================================
2026-01-10 14:23:42.515916 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 14.45s
2026-01-10 14:23:42.515928 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 2.24s
2026-01-10 14:23:42.515940 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.97s
2026-01-10 14:23:42.515952 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.22s
2026-01-10 14:23:42.515964 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.21s
2026-01-10 14:23:42.515975 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.17s
2026-01-10 14:23:42.515986 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.17s
2026-01-10 14:23:42.515998 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.84s
2026-01-10 14:23:42.516019 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.73s
2026-01-10 14:23:42.837182 | orchestrator | ++ semver 9.5.0 7.1.1
2026-01-10 14:23:42.895522 | orchestrator | + [[ 1 -ge 0 ]]
2026-01-10 14:23:42.895623 | orchestrator | + sudo systemctl restart manager.service
2026-01-10 14:23:57.314413 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-01-10 14:23:57.314504 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2026-01-10 14:23:57.314535 | orchestrator | + local max_attempts=60
2026-01-10 14:23:57.314543 | orchestrator | + local name=ceph-ansible
2026-01-10 14:23:57.314550 | orchestrator | + local attempt_num=1
2026-01-10 14:23:57.314557 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-10 14:23:57.351290 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-01-10 14:23:57.351398 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-10 14:23:57.351417 | orchestrator | + sleep 5
2026-01-10 14:24:02.354491 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-10 14:24:02.389175 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-01-10 14:24:02.389279 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-10 14:24:02.389292 | orchestrator | + sleep 5
2026-01-10 14:24:07.393001 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-10 14:24:07.425306 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-01-10 14:24:07.425355 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-10 14:24:07.425361 | orchestrator | + sleep 5
2026-01-10 14:24:12.430487 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-10 14:24:12.477242 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-01-10 14:24:12.477310 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-10 14:24:12.477317 | orchestrator | + sleep 5
2026-01-10 14:24:17.482454 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-10 14:24:17.526769 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-01-10 14:24:17.526906 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-10 14:24:17.526922 | orchestrator | + sleep 5
2026-01-10 14:24:22.531558 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-10 14:24:22.573196 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-01-10 14:24:22.573292 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-10 14:24:22.573316 | orchestrator | + sleep 5
2026-01-10 14:24:27.578551 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-10 14:24:27.623551 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-01-10 14:24:27.623661 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-10 14:24:27.623677 | orchestrator | + sleep 5
2026-01-10 14:24:32.629054 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-10 14:24:32.676176 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-01-10 14:24:32.676275 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-10 14:24:32.676291 | orchestrator | + sleep 5
2026-01-10 14:24:37.679980 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-10 14:24:37.858354 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-01-10 14:24:37.858440 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-10 14:24:37.858452 | orchestrator | + sleep 5
2026-01-10 14:24:42.862205 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-10 14:24:42.904236 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-01-10 14:24:43.009393 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-10 14:24:43.009466 | orchestrator | + sleep 5
2026-01-10 14:24:47.909077 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-10 14:24:47.950610 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-01-10 14:24:47.950715 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-10 14:24:47.950731 | orchestrator | + sleep 5
2026-01-10 14:24:52.954453 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-10 14:24:52.987532 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-01-10 14:24:52.987622 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-10 14:24:52.987635 | orchestrator | + sleep 5
2026-01-10 14:24:57.992133 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-10 14:24:58.036060 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-01-10 14:24:58.113583 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-10 14:24:58.113660 | orchestrator | + sleep 5
2026-01-10 14:25:03.040080 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-10 14:25:03.084124 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-01-10 14:25:03.084218 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2026-01-10 14:25:03.084233 | orchestrator | + local max_attempts=60
2026-01-10 14:25:03.084246 | orchestrator | + local name=kolla-ansible
2026-01-10 14:25:03.084258 | orchestrator | + local attempt_num=1
2026-01-10 14:25:03.085234 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2026-01-10 14:25:03.125590 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-01-10 14:25:03.125685 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2026-01-10 14:25:03.125700 | orchestrator | + local max_attempts=60
2026-01-10 14:25:03.125713 | orchestrator | + local name=osism-ansible
2026-01-10 14:25:03.125724 | orchestrator | + local attempt_num=1
2026-01-10 14:25:03.126211 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2026-01-10 14:25:03.166187 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-01-10 14:25:03.166254 | orchestrator | + [[ true == \t\r\u\e ]]
2026-01-10 14:25:03.166262 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2026-01-10 14:25:03.355842 | orchestrator | ARA in ceph-ansible already disabled.
2026-01-10 14:25:03.501182 | orchestrator | ARA in kolla-ansible already disabled.
2026-01-10 14:25:03.668537 | orchestrator | ARA in osism-ansible already disabled.
2026-01-10 14:25:03.817295 | orchestrator | ARA in osism-kubernetes already disabled.
2026-01-10 14:25:03.817659 | orchestrator | + osism apply gather-facts
2026-01-10 14:25:15.971268 | orchestrator | 2026-01-10 14:25:15 | INFO  | Task e296438b-78db-4af7-b935-87d32ed7c9a7 (gather-facts) was prepared for execution.
2026-01-10 14:25:15.971367 | orchestrator | 2026-01-10 14:25:15 | INFO  | It takes a moment until task e296438b-78db-4af7-b935-87d32ed7c9a7 (gather-facts) has been started and output is visible here.
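The health wait traced above polls `docker inspect` every five seconds until the container reports `healthy`, giving up after `max_attempts` checks. A minimal sketch of such a helper, assuming `docker` is on `PATH` (names mirror the trace; the actual function lives in the testbed deploy scripts and may differ):

```shell
# Poll a container's health status until it reports "healthy".
# Returns non-zero if max_attempts checks pass without success.
wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1
    until [[ "$(docker inspect -f '{{.State.Health.Status}}' "${name}")" == "healthy" ]]; do
        if (( attempt_num++ == max_attempts )); then
            echo "Container ${name} did not become healthy in time" >&2
            return 1
        fi
        sleep 5
    done
}
```

Called as in the trace, e.g. `wait_for_container_healthy 60 ceph-ansible`; a container stuck in `starting` or `unhealthy` for all attempts makes the helper fail instead of blocking forever.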
2026-01-10 14:25:30.202587 | orchestrator | 2026-01-10 14:25:30.202706 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-01-10 14:25:30.202724 | orchestrator | 2026-01-10 14:25:30.202737 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-01-10 14:25:30.202749 | orchestrator | Saturday 10 January 2026 14:25:20 +0000 (0:00:00.222) 0:00:00.222 ****** 2026-01-10 14:25:30.202760 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:25:30.202772 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:25:30.202839 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:25:30.202852 | orchestrator | ok: [testbed-manager] 2026-01-10 14:25:30.202933 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:25:30.202948 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:25:30.202959 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:25:30.202970 | orchestrator | 2026-01-10 14:25:30.202982 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-01-10 14:25:30.202993 | orchestrator | 2026-01-10 14:25:30.203004 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-01-10 14:25:30.203016 | orchestrator | Saturday 10 January 2026 14:25:29 +0000 (0:00:09.027) 0:00:09.249 ****** 2026-01-10 14:25:30.203027 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:25:30.203039 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:25:30.203050 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:25:30.203061 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:25:30.203072 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:25:30.203083 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:25:30.203094 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:25:30.203104 | orchestrator | 2026-01-10 14:25:30.203115 | orchestrator | PLAY RECAP 
********************************************************************* 2026-01-10 14:25:30.203129 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-10 14:25:30.203143 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-10 14:25:30.203156 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-10 14:25:30.203168 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-10 14:25:30.203181 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-10 14:25:30.203224 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-10 14:25:30.203237 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-10 14:25:30.203254 | orchestrator | 2026-01-10 14:25:30.203273 | orchestrator | 2026-01-10 14:25:30.203292 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-10 14:25:30.203311 | orchestrator | Saturday 10 January 2026 14:25:29 +0000 (0:00:00.542) 0:00:09.792 ****** 2026-01-10 14:25:30.203332 | orchestrator | =============================================================================== 2026-01-10 14:25:30.203345 | orchestrator | Gathers facts about hosts ----------------------------------------------- 9.03s 2026-01-10 14:25:30.203358 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.54s 2026-01-10 14:25:30.567484 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2026-01-10 14:25:30.587590 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2026-01-10 14:25:30.608567 | 
orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2026-01-10 14:25:30.624539 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2026-01-10 14:25:30.644094 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2026-01-10 14:25:30.664219 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2026-01-10 14:25:30.678987 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2026-01-10 14:25:30.696722 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2026-01-10 14:25:30.714331 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2026-01-10 14:25:30.728566 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2026-01-10 14:25:30.741153 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2026-01-10 14:25:30.764183 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2026-01-10 14:25:30.784397 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2026-01-10 14:25:30.803389 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2026-01-10 14:25:30.824434 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2026-01-10 14:25:30.845446 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2026-01-10 14:25:30.858106 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2026-01-10 14:25:30.878700 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2026-01-10 14:25:30.899549 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2026-01-10 14:25:30.918769 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2026-01-10 14:25:30.938662 | orchestrator | + [[ false == \t\r\u\e ]] 2026-01-10 14:25:31.201006 | orchestrator | ok: Runtime: 0:25:04.171709 2026-01-10 14:25:31.322988 | 2026-01-10 14:25:31.323145 | TASK [Deploy services] 2026-01-10 14:25:31.859836 | orchestrator | skipping: Conditional result was False 2026-01-10 14:25:31.876085 | 2026-01-10 14:25:31.876291 | TASK [Deploy in a nutshell] 2026-01-10 14:25:32.614477 | orchestrator | + set -e 2026-01-10 14:25:32.614658 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-01-10 14:25:32.614692 | orchestrator | ++ export INTERACTIVE=false 2026-01-10 14:25:32.614715 | orchestrator | ++ INTERACTIVE=false 2026-01-10 14:25:32.614755 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-01-10 14:25:32.614769 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-01-10 14:25:32.614838 | orchestrator | + source /opt/manager-vars.sh 2026-01-10 14:25:32.614884 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-01-10 14:25:32.614922 | orchestrator | ++ NUMBER_OF_NODES=6 2026-01-10 14:25:32.614937 | orchestrator | ++ export CEPH_VERSION=reef 2026-01-10 14:25:32.614953 | orchestrator | ++ CEPH_VERSION=reef 2026-01-10 14:25:32.614966 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-01-10 14:25:32.614984 | orchestrator | ++ 
CONFIGURATION_VERSION=main 2026-01-10 14:25:32.614995 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-01-10 14:25:32.615015 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-01-10 14:25:32.615027 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-01-10 14:25:32.615041 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-01-10 14:25:32.615052 | orchestrator | ++ export ARA=false 2026-01-10 14:25:32.615063 | orchestrator | ++ ARA=false 2026-01-10 14:25:32.615074 | orchestrator | ++ export DEPLOY_MODE=manager 2026-01-10 14:25:32.615090 | orchestrator | ++ DEPLOY_MODE=manager 2026-01-10 14:25:32.615117 | orchestrator | ++ export TEMPEST=false 2026-01-10 14:25:32.615128 | orchestrator | ++ TEMPEST=false 2026-01-10 14:25:32.615150 | orchestrator | ++ export IS_ZUUL=true 2026-01-10 14:25:32.615162 | orchestrator | ++ IS_ZUUL=true 2026-01-10 14:25:32.615173 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.106 2026-01-10 14:25:32.615184 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.106 2026-01-10 14:25:32.615195 | orchestrator | ++ export EXTERNAL_API=false 2026-01-10 14:25:32.615206 | orchestrator | ++ EXTERNAL_API=false 2026-01-10 14:25:32.615217 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-01-10 14:25:32.615228 | orchestrator | ++ IMAGE_USER=ubuntu 2026-01-10 14:25:32.615239 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-01-10 14:25:32.615254 | orchestrator | 2026-01-10 14:25:32.615267 | orchestrator | # PULL IMAGES 2026-01-10 14:25:32.615278 | orchestrator | 2026-01-10 14:25:32.615289 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-01-10 14:25:32.615300 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-01-10 14:25:32.615318 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-01-10 14:25:32.615329 | orchestrator | + echo 2026-01-10 14:25:32.615340 | orchestrator | + echo '# PULL IMAGES' 2026-01-10 14:25:32.615351 | orchestrator | + echo 2026-01-10 14:25:32.616708 | orchestrator | ++ semver 9.5.0 7.0.0 2026-01-10 
14:25:32.691126 | orchestrator | + [[ 1 -ge 0 ]] 2026-01-10 14:25:32.691227 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images 2026-01-10 14:25:34.704204 | orchestrator | 2026-01-10 14:25:34 | INFO  | Trying to run play pull-images in environment custom 2026-01-10 14:25:44.794450 | orchestrator | 2026-01-10 14:25:44 | INFO  | Task 88f36698-683b-48c1-aab2-035d97ce2ccf (pull-images) was prepared for execution. 2026-01-10 14:25:44.794577 | orchestrator | 2026-01-10 14:25:44 | INFO  | Task 88f36698-683b-48c1-aab2-035d97ce2ccf is running in background. No more output. Check ARA for logs. 2026-01-10 14:25:47.087857 | orchestrator | 2026-01-10 14:25:47 | INFO  | Trying to run play wipe-partitions in environment custom 2026-01-10 14:25:57.272369 | orchestrator | 2026-01-10 14:25:57 | INFO  | Task 70ee2adb-7d14-4d4a-ab48-dd6dec8a86fb (wipe-partitions) was prepared for execution. 2026-01-10 14:25:57.272506 | orchestrator | 2026-01-10 14:25:57 | INFO  | It takes a moment until task 70ee2adb-7d14-4d4a-ab48-dd6dec8a86fb (wipe-partitions) has been started and output is visible here. 
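
The trace above sources `/opt/manager-vars.sh` and then gates on `semver 9.5.0 7.0.0` returning `1` (`[[ 1 -ge 0 ]]`) before launching `osism apply --no-wait -r 2 -e custom pull-images`. A minimal sketch of such a version gate, assuming only coreutils; `vergte` is a hypothetical helper built on `sort -V`, not the testbed's actual `semver` binary:

```shell
#!/usr/bin/env bash
# Hedged sketch, not the real include.sh: only run the new-style pull
# when the manager version is at least 7.0.0.
set -e

vergte() {  # exit 0 if version $1 >= version $2
    [ "$(printf '%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

MANAGER_VERSION=9.5.0
if vergte "$MANAGER_VERSION" 7.0.0; then
    echo "manager >= 7.0.0: using --no-wait pull"
fi
```

The real script delegates the comparison to a `semver` helper that prints `-1`/`0`/`1`; `sort -V` gives the same ordering for plain `X.Y.Z` versions.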
2026-01-10 14:26:10.624742 | orchestrator | 2026-01-10 14:26:10.624837 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2026-01-10 14:26:10.624900 | orchestrator | 2026-01-10 14:26:10.624913 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2026-01-10 14:26:10.624929 | orchestrator | Saturday 10 January 2026 14:26:01 +0000 (0:00:00.128) 0:00:00.128 ****** 2026-01-10 14:26:10.624937 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:26:10.624945 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:26:10.624953 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:26:10.624960 | orchestrator | 2026-01-10 14:26:10.624968 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2026-01-10 14:26:10.624996 | orchestrator | Saturday 10 January 2026 14:26:02 +0000 (0:00:00.589) 0:00:00.717 ****** 2026-01-10 14:26:10.625004 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:26:10.625012 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:26:10.625019 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:26:10.625029 | orchestrator | 2026-01-10 14:26:10.625041 | orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2026-01-10 14:26:10.625054 | orchestrator | Saturday 10 January 2026 14:26:02 +0000 (0:00:00.384) 0:00:01.101 ****** 2026-01-10 14:26:10.625066 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:26:10.625078 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:26:10.625089 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:26:10.625100 | orchestrator | 2026-01-10 14:26:10.625112 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2026-01-10 14:26:10.625124 | orchestrator | Saturday 10 January 2026 14:26:03 +0000 (0:00:00.606) 0:00:01.708 ****** 2026-01-10 14:26:10.625136 | orchestrator | skipping: 
[testbed-node-3] 2026-01-10 14:26:10.625148 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:26:10.625160 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:26:10.625171 | orchestrator | 2026-01-10 14:26:10.625182 | orchestrator | TASK [Check device availability] *********************************************** 2026-01-10 14:26:10.625189 | orchestrator | Saturday 10 January 2026 14:26:03 +0000 (0:00:00.286) 0:00:01.995 ****** 2026-01-10 14:26:10.625197 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2026-01-10 14:26:10.625209 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2026-01-10 14:26:10.625222 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2026-01-10 14:26:10.625235 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2026-01-10 14:26:10.625247 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2026-01-10 14:26:10.625258 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2026-01-10 14:26:10.625271 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2026-01-10 14:26:10.625284 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2026-01-10 14:26:10.625297 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2026-01-10 14:26:10.625308 | orchestrator | 2026-01-10 14:26:10.625316 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2026-01-10 14:26:10.625325 | orchestrator | Saturday 10 January 2026 14:26:05 +0000 (0:00:01.311) 0:00:03.306 ****** 2026-01-10 14:26:10.625333 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2026-01-10 14:26:10.625342 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2026-01-10 14:26:10.625350 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2026-01-10 14:26:10.625358 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2026-01-10 14:26:10.625366 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2026-01-10 14:26:10.625374 | orchestrator | ok: 
[testbed-node-5] => (item=/dev/sdc) 2026-01-10 14:26:10.625383 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2026-01-10 14:26:10.625390 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2026-01-10 14:26:10.625399 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2026-01-10 14:26:10.625407 | orchestrator | 2026-01-10 14:26:10.625415 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2026-01-10 14:26:10.625423 | orchestrator | Saturday 10 January 2026 14:26:06 +0000 (0:00:01.649) 0:00:04.956 ****** 2026-01-10 14:26:10.625432 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2026-01-10 14:26:10.625442 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2026-01-10 14:26:10.625452 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2026-01-10 14:26:10.625462 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2026-01-10 14:26:10.625472 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2026-01-10 14:26:10.625482 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2026-01-10 14:26:10.625496 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2026-01-10 14:26:10.625511 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2026-01-10 14:26:10.625537 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2026-01-10 14:26:10.625548 | orchestrator | 2026-01-10 14:26:10.625558 | orchestrator | TASK [Reload udev rules] ******************************************************* 2026-01-10 14:26:10.625568 | orchestrator | Saturday 10 January 2026 14:26:08 +0000 (0:00:02.180) 0:00:07.137 ****** 2026-01-10 14:26:10.625578 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:26:10.625586 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:26:10.625595 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:26:10.625603 | orchestrator | 2026-01-10 14:26:10.625612 | orchestrator | TASK [Request device events from the 
kernel] *********************************** 2026-01-10 14:26:10.625621 | orchestrator | Saturday 10 January 2026 14:26:09 +0000 (0:00:00.653) 0:00:07.791 ****** 2026-01-10 14:26:10.625629 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:26:10.625704 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:26:10.625714 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:26:10.625722 | orchestrator | 2026-01-10 14:26:10.625731 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-10 14:26:10.625741 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-10 14:26:10.625753 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-10 14:26:10.625780 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-10 14:26:10.625790 | orchestrator | 2026-01-10 14:26:10.625798 | orchestrator | 2026-01-10 14:26:10.625807 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-10 14:26:10.625816 | orchestrator | Saturday 10 January 2026 14:26:10 +0000 (0:00:00.648) 0:00:08.440 ****** 2026-01-10 14:26:10.625824 | orchestrator | =============================================================================== 2026-01-10 14:26:10.625833 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.18s 2026-01-10 14:26:10.625841 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.65s 2026-01-10 14:26:10.625870 | orchestrator | Check device availability ----------------------------------------------- 1.31s 2026-01-10 14:26:10.625879 | orchestrator | Reload udev rules ------------------------------------------------------- 0.65s 2026-01-10 14:26:10.625887 | orchestrator | Request device events from the kernel 
----------------------------------- 0.65s 2026-01-10 14:26:10.625896 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.61s 2026-01-10 14:26:10.625904 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.59s 2026-01-10 14:26:10.625913 | orchestrator | Remove all rook related logical devices --------------------------------- 0.38s 2026-01-10 14:26:10.625921 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.29s 2026-01-10 14:26:23.014737 | orchestrator | 2026-01-10 14:26:23 | INFO  | Task 296a87d4-da79-4456-b28b-5d84e21a64b0 (facts) was prepared for execution. 2026-01-10 14:26:23.014840 | orchestrator | 2026-01-10 14:26:23 | INFO  | It takes a moment until task 296a87d4-da79-4456-b28b-5d84e21a64b0 (facts) has been started and output is visible here. 2026-01-10 14:26:35.594669 | orchestrator | 2026-01-10 14:26:35.594820 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-01-10 14:26:35.594835 | orchestrator | 2026-01-10 14:26:35.594846 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-01-10 14:26:35.594857 | orchestrator | Saturday 10 January 2026 14:26:27 +0000 (0:00:00.286) 0:00:00.286 ****** 2026-01-10 14:26:35.594868 | orchestrator | ok: [testbed-manager] 2026-01-10 14:26:35.594879 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:26:35.594889 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:26:35.594899 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:26:35.595000 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:26:35.595012 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:26:35.595022 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:26:35.595031 | orchestrator | 2026-01-10 14:26:35.595041 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-01-10 14:26:35.595051 | 
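
The wipe-partitions play above performs, per OSD disk: `wipefs`, zeroing the first 32M, then a udev rules reload and a device-event trigger. A hedged per-device sketch of that sequence, run here against a throwaway file image instead of a real `/dev/sdX`; the udev steps only make sense on actual nodes, so they are left as comments:

```shell
set -e
img=$(mktemp)                      # stand-in for /dev/sdb etc.
truncate -s 64M "$img"
printf 'FAKE-SIGNATURE' | dd of="$img" conv=notrunc status=none

# wipefs --all "$img"              # on a real disk: drop fs/partition signatures
dd if=/dev/zero of="$img" bs=1M count=32 conv=notrunc status=none

# udevadm control --reload-rules            # "Reload udev rules"
# udevadm trigger --subsystem-match=block   # "Request device events from the kernel"

cmp -s -n $((32 * 1024 * 1024)) "$img" /dev/zero && echo "first 32M zeroed"
rm -f "$img"
```

Zeroing the head of the disk removes LVM/GPT metadata that `wipefs` alone may leave reachable from backup copies, which is why the play does both.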
orchestrator | Saturday 10 January 2026 14:26:28 +0000 (0:00:01.128) 0:00:01.414 ****** 2026-01-10 14:26:35.595061 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:26:35.595071 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:26:35.595081 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:26:35.595090 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:26:35.595100 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:26:35.595109 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:26:35.595119 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:26:35.595128 | orchestrator | 2026-01-10 14:26:35.595138 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-01-10 14:26:35.595148 | orchestrator | 2026-01-10 14:26:35.595177 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-01-10 14:26:35.595189 | orchestrator | Saturday 10 January 2026 14:26:29 +0000 (0:00:01.327) 0:00:02.742 ****** 2026-01-10 14:26:35.595200 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:26:35.595211 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:26:35.595221 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:26:35.595233 | orchestrator | ok: [testbed-manager] 2026-01-10 14:26:35.595244 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:26:35.595254 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:26:35.595265 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:26:35.595275 | orchestrator | 2026-01-10 14:26:35.595286 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-01-10 14:26:35.595297 | orchestrator | 2026-01-10 14:26:35.595308 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-01-10 14:26:35.595320 | orchestrator | Saturday 10 January 2026 14:26:34 +0000 (0:00:05.017) 0:00:07.759 ****** 2026-01-10 14:26:35.595330 | orchestrator | 
skipping: [testbed-manager] 2026-01-10 14:26:35.595341 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:26:35.595352 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:26:35.595364 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:26:35.595375 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:26:35.595385 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:26:35.595396 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:26:35.595406 | orchestrator | 2026-01-10 14:26:35.595417 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-10 14:26:35.595429 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-10 14:26:35.595440 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-10 14:26:35.595450 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-10 14:26:35.595460 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-10 14:26:35.595469 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-10 14:26:35.595479 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-10 14:26:35.595488 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-10 14:26:35.595498 | orchestrator | 2026-01-10 14:26:35.595507 | orchestrator | 2026-01-10 14:26:35.595517 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-10 14:26:35.595537 | orchestrator | Saturday 10 January 2026 14:26:35 +0000 (0:00:00.480) 0:00:08.240 ****** 2026-01-10 14:26:35.595547 | orchestrator | =============================================================================== 
2026-01-10 14:26:35.595557 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.02s 2026-01-10 14:26:35.595566 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.33s 2026-01-10 14:26:35.595576 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.13s 2026-01-10 14:26:35.595586 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.48s 2026-01-10 14:26:37.804374 | orchestrator | 2026-01-10 14:26:37 | INFO  | Task a1f39ba7-7bf3-4ef4-abb3-82afb86a0edd (ceph-configure-lvm-volumes) was prepared for execution. 2026-01-10 14:26:37.804504 | orchestrator | 2026-01-10 14:26:37 | INFO  | It takes a moment until task a1f39ba7-7bf3-4ef4-abb3-82afb86a0edd (ceph-configure-lvm-volumes) has been started and output is visible here. 2026-01-10 14:26:49.403581 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-01-10 14:26:49.403695 | orchestrator | 2.16.14 2026-01-10 14:26:49.403712 | orchestrator | 2026-01-10 14:26:49.403724 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2026-01-10 14:26:49.403735 | orchestrator | 2026-01-10 14:26:49.403745 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-01-10 14:26:49.403756 | orchestrator | Saturday 10 January 2026 14:26:42 +0000 (0:00:00.332) 0:00:00.332 ****** 2026-01-10 14:26:49.403766 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-01-10 14:26:49.403776 | orchestrator | 2026-01-10 14:26:49.403786 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-01-10 14:26:49.403795 | orchestrator | Saturday 10 January 2026 14:26:42 +0000 (0:00:00.245) 0:00:00.578 ****** 2026-01-10 14:26:49.403805 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:26:49.403815 | orchestrator | 
2026-01-10 14:26:49.403825 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:26:49.403834 | orchestrator | Saturday 10 January 2026 14:26:42 +0000 (0:00:00.233) 0:00:00.812 ****** 2026-01-10 14:26:49.403844 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2026-01-10 14:26:49.403863 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2026-01-10 14:26:49.403873 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2026-01-10 14:26:49.403883 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2026-01-10 14:26:49.403892 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2026-01-10 14:26:49.403902 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2026-01-10 14:26:49.403911 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2026-01-10 14:26:49.403921 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2026-01-10 14:26:49.403930 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2026-01-10 14:26:49.403940 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2026-01-10 14:26:49.403949 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2026-01-10 14:26:49.403959 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2026-01-10 14:26:49.403968 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2026-01-10 14:26:49.404034 | orchestrator | 2026-01-10 14:26:49.404046 | orchestrator | TASK [Add known links to the list of 
available block devices] ****************** 2026-01-10 14:26:49.404056 | orchestrator | Saturday 10 January 2026 14:26:43 +0000 (0:00:00.494) 0:00:01.306 ****** 2026-01-10 14:26:49.404085 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:26:49.404096 | orchestrator | 2026-01-10 14:26:49.404108 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:26:49.404118 | orchestrator | Saturday 10 January 2026 14:26:43 +0000 (0:00:00.209) 0:00:01.516 ****** 2026-01-10 14:26:49.404130 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:26:49.404140 | orchestrator | 2026-01-10 14:26:49.404150 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:26:49.404161 | orchestrator | Saturday 10 January 2026 14:26:43 +0000 (0:00:00.185) 0:00:01.701 ****** 2026-01-10 14:26:49.404172 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:26:49.404183 | orchestrator | 2026-01-10 14:26:49.404194 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:26:49.404206 | orchestrator | Saturday 10 January 2026 14:26:43 +0000 (0:00:00.197) 0:00:01.899 ****** 2026-01-10 14:26:49.404221 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:26:49.404232 | orchestrator | 2026-01-10 14:26:49.404242 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:26:49.404253 | orchestrator | Saturday 10 January 2026 14:26:44 +0000 (0:00:00.224) 0:00:02.124 ****** 2026-01-10 14:26:49.404264 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:26:49.404275 | orchestrator | 2026-01-10 14:26:49.404286 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:26:49.404297 | orchestrator | Saturday 10 January 2026 14:26:44 +0000 (0:00:00.228) 0:00:02.353 ****** 2026-01-10 14:26:49.404308 | orchestrator | skipping: 
[testbed-node-3] 2026-01-10 14:26:49.404318 | orchestrator | 2026-01-10 14:26:49.404330 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:26:49.404341 | orchestrator | Saturday 10 January 2026 14:26:44 +0000 (0:00:00.219) 0:00:02.572 ****** 2026-01-10 14:26:49.404352 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:26:49.404362 | orchestrator | 2026-01-10 14:26:49.404373 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:26:49.404384 | orchestrator | Saturday 10 January 2026 14:26:44 +0000 (0:00:00.210) 0:00:02.783 ****** 2026-01-10 14:26:49.404395 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:26:49.404405 | orchestrator | 2026-01-10 14:26:49.404416 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:26:49.404427 | orchestrator | Saturday 10 January 2026 14:26:44 +0000 (0:00:00.219) 0:00:03.003 ****** 2026-01-10 14:26:49.404438 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_de778ce0-4f6d-44be-b211-86e0cabeb927) 2026-01-10 14:26:49.404451 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_de778ce0-4f6d-44be-b211-86e0cabeb927) 2026-01-10 14:26:49.404461 | orchestrator | 2026-01-10 14:26:49.404470 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:26:49.404496 | orchestrator | Saturday 10 January 2026 14:26:45 +0000 (0:00:00.417) 0:00:03.421 ****** 2026-01-10 14:26:49.404506 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_15b7e91d-d5fd-4068-ac98-0857e3d5fdf2) 2026-01-10 14:26:49.404521 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_15b7e91d-d5fd-4068-ac98-0857e3d5fdf2) 2026-01-10 14:26:49.404532 | orchestrator | 2026-01-10 14:26:49.404541 | orchestrator | TASK [Add known links to the list of available block 
devices] ****************** 2026-01-10 14:26:49.404551 | orchestrator | Saturday 10 January 2026 14:26:46 +0000 (0:00:00.635) 0:00:04.056 ****** 2026-01-10 14:26:49.404561 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_b1ccfdf8-8ebc-4581-b39b-71c057731eea) 2026-01-10 14:26:49.404570 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_b1ccfdf8-8ebc-4581-b39b-71c057731eea) 2026-01-10 14:26:49.404580 | orchestrator | 2026-01-10 14:26:49.404590 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:26:49.404599 | orchestrator | Saturday 10 January 2026 14:26:46 +0000 (0:00:00.662) 0:00:04.719 ****** 2026-01-10 14:26:49.404616 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_c3c9ac61-c03e-421c-9f43-37b1f8399a84) 2026-01-10 14:26:49.404626 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_c3c9ac61-c03e-421c-9f43-37b1f8399a84) 2026-01-10 14:26:49.404636 | orchestrator | 2026-01-10 14:26:49.404645 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:26:49.404655 | orchestrator | Saturday 10 January 2026 14:26:47 +0000 (0:00:00.674) 0:00:05.393 ****** 2026-01-10 14:26:49.404664 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-01-10 14:26:49.404674 | orchestrator | 2026-01-10 14:26:49.404683 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:26:49.404693 | orchestrator | Saturday 10 January 2026 14:26:47 +0000 (0:00:00.323) 0:00:05.716 ****** 2026-01-10 14:26:49.404702 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2026-01-10 14:26:49.404712 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2026-01-10 14:26:49.404721 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2026-01-10 14:26:49.404731 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2026-01-10 14:26:49.404740 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2026-01-10 14:26:49.404750 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2026-01-10 14:26:49.404759 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2026-01-10 14:26:49.404769 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2026-01-10 14:26:49.404779 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2026-01-10 14:26:49.404789 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2026-01-10 14:26:49.404798 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2026-01-10 14:26:49.404807 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2026-01-10 14:26:49.404817 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2026-01-10 14:26:49.404826 | orchestrator | 2026-01-10 14:26:49.404836 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:26:49.404846 | orchestrator | Saturday 10 January 2026 14:26:48 +0000 (0:00:00.359) 0:00:06.076 ****** 2026-01-10 14:26:49.404855 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:26:49.404865 | orchestrator | 2026-01-10 14:26:49.404875 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:26:49.404884 | orchestrator | Saturday 10 January 2026 14:26:48 +0000 
(0:00:00.196) 0:00:06.272 ****** 2026-01-10 14:26:49.404893 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:26:49.404903 | orchestrator | 2026-01-10 14:26:49.404912 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:26:49.404922 | orchestrator | Saturday 10 January 2026 14:26:48 +0000 (0:00:00.199) 0:00:06.472 ****** 2026-01-10 14:26:49.404931 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:26:49.404941 | orchestrator | 2026-01-10 14:26:49.404950 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:26:49.404960 | orchestrator | Saturday 10 January 2026 14:26:48 +0000 (0:00:00.200) 0:00:06.672 ****** 2026-01-10 14:26:49.404969 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:26:49.404998 | orchestrator | 2026-01-10 14:26:49.405008 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:26:49.405018 | orchestrator | Saturday 10 January 2026 14:26:48 +0000 (0:00:00.199) 0:00:06.871 ****** 2026-01-10 14:26:49.405033 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:26:49.405043 | orchestrator | 2026-01-10 14:26:49.405052 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:26:49.405062 | orchestrator | Saturday 10 January 2026 14:26:48 +0000 (0:00:00.167) 0:00:07.039 ****** 2026-01-10 14:26:49.405072 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:26:49.405081 | orchestrator | 2026-01-10 14:26:49.405091 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:26:49.405100 | orchestrator | Saturday 10 January 2026 14:26:49 +0000 (0:00:00.221) 0:00:07.260 ****** 2026-01-10 14:26:49.405110 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:26:49.405119 | orchestrator | 2026-01-10 14:26:49.405134 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2026-01-10 14:26:56.622429 | orchestrator | Saturday 10 January 2026 14:26:49 +0000 (0:00:00.185) 0:00:07.445 ****** 2026-01-10 14:26:56.622547 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:26:56.622568 | orchestrator | 2026-01-10 14:26:56.622582 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:26:56.622594 | orchestrator | Saturday 10 January 2026 14:26:49 +0000 (0:00:00.183) 0:00:07.629 ****** 2026-01-10 14:26:56.622604 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2026-01-10 14:26:56.622635 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2026-01-10 14:26:56.622647 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2026-01-10 14:26:56.622658 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2026-01-10 14:26:56.622668 | orchestrator | 2026-01-10 14:26:56.622679 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:26:56.622690 | orchestrator | Saturday 10 January 2026 14:26:50 +0000 (0:00:00.846) 0:00:08.476 ****** 2026-01-10 14:26:56.622701 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:26:56.622711 | orchestrator | 2026-01-10 14:26:56.622722 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:26:56.622733 | orchestrator | Saturday 10 January 2026 14:26:50 +0000 (0:00:00.207) 0:00:08.683 ****** 2026-01-10 14:26:56.622744 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:26:56.622754 | orchestrator | 2026-01-10 14:26:56.622765 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:26:56.622775 | orchestrator | Saturday 10 January 2026 14:26:50 +0000 (0:00:00.194) 0:00:08.878 ****** 2026-01-10 14:26:56.622786 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:26:56.622796 | orchestrator | 2026-01-10 
14:26:56.622807 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:26:56.622818 | orchestrator | Saturday 10 January 2026 14:26:51 +0000 (0:00:00.201) 0:00:09.079 ****** 2026-01-10 14:26:56.622828 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:26:56.622839 | orchestrator | 2026-01-10 14:26:56.622849 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-01-10 14:26:56.622860 | orchestrator | Saturday 10 January 2026 14:26:51 +0000 (0:00:00.197) 0:00:09.277 ****** 2026-01-10 14:26:56.622870 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None}) 2026-01-10 14:26:56.622881 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2026-01-10 14:26:56.622892 | orchestrator | 2026-01-10 14:26:56.622902 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-01-10 14:26:56.622913 | orchestrator | Saturday 10 January 2026 14:26:51 +0000 (0:00:00.162) 0:00:09.439 ****** 2026-01-10 14:26:56.622923 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:26:56.622935 | orchestrator | 2026-01-10 14:26:56.622954 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-01-10 14:26:56.622968 | orchestrator | Saturday 10 January 2026 14:26:51 +0000 (0:00:00.130) 0:00:09.570 ****** 2026-01-10 14:26:56.622979 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:26:56.622989 | orchestrator | 2026-01-10 14:26:56.623031 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-01-10 14:26:56.623045 | orchestrator | Saturday 10 January 2026 14:26:51 +0000 (0:00:00.110) 0:00:09.680 ****** 2026-01-10 14:26:56.623080 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:26:56.623091 | orchestrator | 2026-01-10 14:26:56.623102 | orchestrator | TASK [Define lvm_volumes structures] 
******************************************* 2026-01-10 14:26:56.623113 | orchestrator | Saturday 10 January 2026 14:26:51 +0000 (0:00:00.148) 0:00:09.829 ****** 2026-01-10 14:26:56.623123 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:26:56.623134 | orchestrator | 2026-01-10 14:26:56.623145 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-01-10 14:26:56.623156 | orchestrator | Saturday 10 January 2026 14:26:51 +0000 (0:00:00.127) 0:00:09.957 ****** 2026-01-10 14:26:56.623168 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'afcf3728-3a76-5607-aebb-61451d8643bd'}}) 2026-01-10 14:26:56.623179 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '7d69473f-eeb6-5b22-bf27-181ed9eac77f'}}) 2026-01-10 14:26:56.623190 | orchestrator | 2026-01-10 14:26:56.623201 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-01-10 14:26:56.623212 | orchestrator | Saturday 10 January 2026 14:26:52 +0000 (0:00:00.167) 0:00:10.124 ****** 2026-01-10 14:26:56.623224 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'afcf3728-3a76-5607-aebb-61451d8643bd'}})  2026-01-10 14:26:56.623243 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '7d69473f-eeb6-5b22-bf27-181ed9eac77f'}})  2026-01-10 14:26:56.623254 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:26:56.623265 | orchestrator | 2026-01-10 14:26:56.623275 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-01-10 14:26:56.623286 | orchestrator | Saturday 10 January 2026 14:26:52 +0000 (0:00:00.147) 0:00:10.272 ****** 2026-01-10 14:26:56.623296 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'afcf3728-3a76-5607-aebb-61451d8643bd'}})  2026-01-10 14:26:56.623307 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '7d69473f-eeb6-5b22-bf27-181ed9eac77f'}})  2026-01-10 14:26:56.623318 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:26:56.623328 | orchestrator | 2026-01-10 14:26:56.623339 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-01-10 14:26:56.623350 | orchestrator | Saturday 10 January 2026 14:26:52 +0000 (0:00:00.299) 0:00:10.571 ****** 2026-01-10 14:26:56.623361 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'afcf3728-3a76-5607-aebb-61451d8643bd'}})  2026-01-10 14:26:56.623389 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '7d69473f-eeb6-5b22-bf27-181ed9eac77f'}})  2026-01-10 14:26:56.623401 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:26:56.623412 | orchestrator | 2026-01-10 14:26:56.623423 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-01-10 14:26:56.623433 | orchestrator | Saturday 10 January 2026 14:26:52 +0000 (0:00:00.153) 0:00:10.725 ****** 2026-01-10 14:26:56.623444 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:26:56.623454 | orchestrator | 2026-01-10 14:26:56.623465 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-01-10 14:26:56.623475 | orchestrator | Saturday 10 January 2026 14:26:52 +0000 (0:00:00.118) 0:00:10.844 ****** 2026-01-10 14:26:56.623486 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:26:56.623496 | orchestrator | 2026-01-10 14:26:56.623507 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-01-10 14:26:56.623517 | orchestrator | Saturday 10 January 2026 14:26:52 +0000 (0:00:00.133) 0:00:10.977 ****** 2026-01-10 14:26:56.623528 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:26:56.623538 | orchestrator | 
2026-01-10 14:26:56.623549 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-01-10 14:26:56.623559 | orchestrator | Saturday 10 January 2026 14:26:53 +0000 (0:00:00.160) 0:00:11.138 ****** 2026-01-10 14:26:56.623578 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:26:56.623589 | orchestrator | 2026-01-10 14:26:56.623600 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-01-10 14:26:56.623610 | orchestrator | Saturday 10 January 2026 14:26:53 +0000 (0:00:00.128) 0:00:11.266 ****** 2026-01-10 14:26:56.623621 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:26:56.623632 | orchestrator | 2026-01-10 14:26:56.623642 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-01-10 14:26:56.623653 | orchestrator | Saturday 10 January 2026 14:26:53 +0000 (0:00:00.122) 0:00:11.388 ****** 2026-01-10 14:26:56.623664 | orchestrator | ok: [testbed-node-3] => { 2026-01-10 14:26:56.623675 | orchestrator |  "ceph_osd_devices": { 2026-01-10 14:26:56.623686 | orchestrator |  "sdb": { 2026-01-10 14:26:56.623697 | orchestrator |  "osd_lvm_uuid": "afcf3728-3a76-5607-aebb-61451d8643bd" 2026-01-10 14:26:56.623708 | orchestrator |  }, 2026-01-10 14:26:56.623719 | orchestrator |  "sdc": { 2026-01-10 14:26:56.623730 | orchestrator |  "osd_lvm_uuid": "7d69473f-eeb6-5b22-bf27-181ed9eac77f" 2026-01-10 14:26:56.623741 | orchestrator |  } 2026-01-10 14:26:56.623752 | orchestrator |  } 2026-01-10 14:26:56.623763 | orchestrator | } 2026-01-10 14:26:56.623773 | orchestrator | 2026-01-10 14:26:56.623784 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-01-10 14:26:56.623801 | orchestrator | Saturday 10 January 2026 14:26:53 +0000 (0:00:00.126) 0:00:11.514 ****** 2026-01-10 14:26:56.623811 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:26:56.623822 | orchestrator | 
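The `ceph_osd_devices` dump above shows each device entry that started as `None` filled in with a fixed `osd_lvm_uuid`; the printed IDs are version-5 (name-based) UUIDs, so they are stable across runs. A minimal Python sketch of that "Set UUIDs for OSD VGs/LVs" step, assuming a `<hostname>-<device>` name under the DNS namespace (the playbook's actual namespace and name inputs are not visible in this log):

```python
import uuid

def osd_lvm_uuids(hostname: str, devices: dict) -> dict:
    """Fill in a deterministic UUID for every OSD device lacking one.

    The UUIDs printed in the log (e.g. afcf3728-3a76-5607-...) are
    version 5, i.e. name-based, so re-running yields the same IDs.
    The namespace and the "<hostname>-<device>" name used here are
    assumptions for illustration only.
    """
    return {
        dev: {"osd_lvm_uuid": (val or {}).get("osd_lvm_uuid")
              or str(uuid.uuid5(uuid.NAMESPACE_DNS, f"{hostname}-{dev}"))}
        for dev, val in devices.items()
    }

# Mirrors the task input seen above: {'key': 'sdb', 'value': None}, ...
ids = osd_lvm_uuids("testbed-node-3", {"sdb": None, "sdc": None})
```

Because the IDs are deterministic, re-running the play regenerates the same VG/LV names instead of inventing new ones, which keeps the step idempotent.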
2026-01-10 14:26:56.623833 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-01-10 14:26:56.623843 | orchestrator | Saturday 10 January 2026 14:26:53 +0000 (0:00:00.151) 0:00:11.666 ****** 2026-01-10 14:26:56.623853 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:26:56.623864 | orchestrator | 2026-01-10 14:26:56.623875 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-01-10 14:26:56.623885 | orchestrator | Saturday 10 January 2026 14:26:53 +0000 (0:00:00.112) 0:00:11.779 ****** 2026-01-10 14:26:56.623896 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:26:56.623906 | orchestrator | 2026-01-10 14:26:56.623916 | orchestrator | TASK [Print configuration data] ************************************************ 2026-01-10 14:26:56.623927 | orchestrator | Saturday 10 January 2026 14:26:53 +0000 (0:00:00.128) 0:00:11.907 ****** 2026-01-10 14:26:56.623937 | orchestrator | changed: [testbed-node-3] => { 2026-01-10 14:26:56.623948 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-01-10 14:26:56.623959 | orchestrator |  "ceph_osd_devices": { 2026-01-10 14:26:56.623969 | orchestrator |  "sdb": { 2026-01-10 14:26:56.623980 | orchestrator |  "osd_lvm_uuid": "afcf3728-3a76-5607-aebb-61451d8643bd" 2026-01-10 14:26:56.623991 | orchestrator |  }, 2026-01-10 14:26:56.624028 | orchestrator |  "sdc": { 2026-01-10 14:26:56.624040 | orchestrator |  "osd_lvm_uuid": "7d69473f-eeb6-5b22-bf27-181ed9eac77f" 2026-01-10 14:26:56.624051 | orchestrator |  } 2026-01-10 14:26:56.624062 | orchestrator |  }, 2026-01-10 14:26:56.624073 | orchestrator |  "lvm_volumes": [ 2026-01-10 14:26:56.624083 | orchestrator |  { 2026-01-10 14:26:56.624094 | orchestrator |  "data": "osd-block-afcf3728-3a76-5607-aebb-61451d8643bd", 2026-01-10 14:26:56.624105 | orchestrator |  "data_vg": "ceph-afcf3728-3a76-5607-aebb-61451d8643bd" 2026-01-10 14:26:56.624115 | orchestrator |  }, 
2026-01-10 14:26:56.624126 | orchestrator |  { 2026-01-10 14:26:56.624137 | orchestrator |  "data": "osd-block-7d69473f-eeb6-5b22-bf27-181ed9eac77f", 2026-01-10 14:26:56.624147 | orchestrator |  "data_vg": "ceph-7d69473f-eeb6-5b22-bf27-181ed9eac77f" 2026-01-10 14:26:56.624158 | orchestrator |  } 2026-01-10 14:26:56.624169 | orchestrator |  ] 2026-01-10 14:26:56.624179 | orchestrator |  } 2026-01-10 14:26:56.624190 | orchestrator | } 2026-01-10 14:26:56.624208 | orchestrator | 2026-01-10 14:26:56.624219 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-01-10 14:26:56.624229 | orchestrator | Saturday 10 January 2026 14:26:54 +0000 (0:00:00.350) 0:00:12.258 ****** 2026-01-10 14:26:56.624240 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-01-10 14:26:56.624250 | orchestrator | 2026-01-10 14:26:56.624261 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2026-01-10 14:26:56.624271 | orchestrator | 2026-01-10 14:26:56.624282 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-01-10 14:26:56.624293 | orchestrator | Saturday 10 January 2026 14:26:56 +0000 (0:00:01.833) 0:00:14.092 ****** 2026-01-10 14:26:56.624303 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-01-10 14:26:56.624314 | orchestrator | 2026-01-10 14:26:56.624325 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-01-10 14:26:56.624335 | orchestrator | Saturday 10 January 2026 14:26:56 +0000 (0:00:00.328) 0:00:14.420 ****** 2026-01-10 14:26:56.624346 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:26:56.624357 | orchestrator | 2026-01-10 14:26:56.624374 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:27:04.569935 | orchestrator | Saturday 10 January 2026 14:26:56 +0000 (0:00:00.247) 
0:00:14.668 ****** 2026-01-10 14:27:04.570151 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-01-10 14:27:04.570172 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-01-10 14:27:04.570185 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-01-10 14:27:04.570196 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-01-10 14:27:04.570207 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-01-10 14:27:04.570218 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-01-10 14:27:04.570230 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-01-10 14:27:04.570260 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-01-10 14:27:04.570272 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-01-10 14:27:04.570284 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-01-10 14:27:04.570295 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-01-10 14:27:04.570306 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-01-10 14:27:04.570322 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-01-10 14:27:04.570333 | orchestrator | 2026-01-10 14:27:04.570346 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:27:04.570358 | orchestrator | Saturday 10 January 2026 14:26:56 +0000 (0:00:00.381) 0:00:15.050 ****** 2026-01-10 14:27:04.570369 | orchestrator | skipping: 
[testbed-node-4] 2026-01-10 14:27:04.570381 | orchestrator | 2026-01-10 14:27:04.570392 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:27:04.570403 | orchestrator | Saturday 10 January 2026 14:26:57 +0000 (0:00:00.187) 0:00:15.237 ****** 2026-01-10 14:27:04.570415 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:27:04.570426 | orchestrator | 2026-01-10 14:27:04.570437 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:27:04.570448 | orchestrator | Saturday 10 January 2026 14:26:57 +0000 (0:00:00.204) 0:00:15.442 ****** 2026-01-10 14:27:04.570459 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:27:04.570472 | orchestrator | 2026-01-10 14:27:04.570484 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:27:04.570497 | orchestrator | Saturday 10 January 2026 14:26:57 +0000 (0:00:00.202) 0:00:15.645 ****** 2026-01-10 14:27:04.570533 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:27:04.570547 | orchestrator | 2026-01-10 14:27:04.570560 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:27:04.570572 | orchestrator | Saturday 10 January 2026 14:26:57 +0000 (0:00:00.189) 0:00:15.835 ****** 2026-01-10 14:27:04.570585 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:27:04.570598 | orchestrator | 2026-01-10 14:27:04.570610 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:27:04.570623 | orchestrator | Saturday 10 January 2026 14:26:58 +0000 (0:00:00.618) 0:00:16.454 ****** 2026-01-10 14:27:04.570636 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:27:04.570648 | orchestrator | 2026-01-10 14:27:04.570660 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:27:04.570671 | 
orchestrator | Saturday 10 January 2026 14:26:58 +0000 (0:00:00.210) 0:00:16.665 ****** 2026-01-10 14:27:04.570682 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:27:04.570693 | orchestrator | 2026-01-10 14:27:04.570704 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:27:04.570715 | orchestrator | Saturday 10 January 2026 14:26:58 +0000 (0:00:00.230) 0:00:16.895 ****** 2026-01-10 14:27:04.570726 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:27:04.570736 | orchestrator | 2026-01-10 14:27:04.570747 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:27:04.570758 | orchestrator | Saturday 10 January 2026 14:26:59 +0000 (0:00:00.203) 0:00:17.098 ****** 2026-01-10 14:27:04.570769 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_e7f1afe6-f3aa-449f-9835-56bec5ec9c51) 2026-01-10 14:27:04.570781 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_e7f1afe6-f3aa-449f-9835-56bec5ec9c51) 2026-01-10 14:27:04.570792 | orchestrator | 2026-01-10 14:27:04.570803 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:27:04.570814 | orchestrator | Saturday 10 January 2026 14:26:59 +0000 (0:00:00.474) 0:00:17.573 ****** 2026-01-10 14:27:04.570825 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_44339de3-07bb-4d03-9d3b-2e0777e51af2) 2026-01-10 14:27:04.570836 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_44339de3-07bb-4d03-9d3b-2e0777e51af2) 2026-01-10 14:27:04.570847 | orchestrator | 2026-01-10 14:27:04.570858 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:27:04.570869 | orchestrator | Saturday 10 January 2026 14:26:59 +0000 (0:00:00.416) 0:00:17.990 ****** 2026-01-10 14:27:04.570880 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-0QEMU_QEMU_HARDDISK_09bcf5bb-a136-4209-bef3-37a648ec73be) 2026-01-10 14:27:04.570891 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_09bcf5bb-a136-4209-bef3-37a648ec73be) 2026-01-10 14:27:04.570903 | orchestrator | 2026-01-10 14:27:04.570914 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:27:04.570943 | orchestrator | Saturday 10 January 2026 14:27:00 +0000 (0:00:00.435) 0:00:18.425 ****** 2026-01-10 14:27:04.570955 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_c56b24e3-125a-48ee-acc4-7420ce900c20) 2026-01-10 14:27:04.570967 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_c56b24e3-125a-48ee-acc4-7420ce900c20) 2026-01-10 14:27:04.570978 | orchestrator | 2026-01-10 14:27:04.570995 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:27:04.571007 | orchestrator | Saturday 10 January 2026 14:27:00 +0000 (0:00:00.422) 0:00:18.848 ****** 2026-01-10 14:27:04.571018 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-01-10 14:27:04.571050 | orchestrator | 2026-01-10 14:27:04.571062 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:27:04.571073 | orchestrator | Saturday 10 January 2026 14:27:01 +0000 (0:00:00.314) 0:00:19.162 ****** 2026-01-10 14:27:04.571083 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2026-01-10 14:27:04.571104 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-01-10 14:27:04.571115 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-01-10 14:27:04.571126 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-01-10 14:27:04.571136 | 
orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-01-10 14:27:04.571147 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-01-10 14:27:04.571158 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-01-10 14:27:04.571169 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-01-10 14:27:04.571179 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-01-10 14:27:04.571190 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-01-10 14:27:04.571201 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2026-01-10 14:27:04.571211 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-01-10 14:27:04.571222 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-01-10 14:27:04.571233 | orchestrator | 2026-01-10 14:27:04.571243 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:27:04.571254 | orchestrator | Saturday 10 January 2026 14:27:01 +0000 (0:00:00.367) 0:00:19.529 ****** 2026-01-10 14:27:04.571265 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:27:04.571276 | orchestrator | 2026-01-10 14:27:04.571286 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:27:04.571297 | orchestrator | Saturday 10 January 2026 14:27:02 +0000 (0:00:00.701) 0:00:20.231 ****** 2026-01-10 14:27:04.571308 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:27:04.571319 | orchestrator | 2026-01-10 14:27:04.571329 | orchestrator | TASK [Add known partitions to the list of available block 
devices] ************* 2026-01-10 14:27:04.571340 | orchestrator | Saturday 10 January 2026 14:27:02 +0000 (0:00:00.197) 0:00:20.429 ****** 2026-01-10 14:27:04.571351 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:27:04.571361 | orchestrator | 2026-01-10 14:27:04.571372 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:27:04.571383 | orchestrator | Saturday 10 January 2026 14:27:02 +0000 (0:00:00.226) 0:00:20.656 ****** 2026-01-10 14:27:04.571394 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:27:04.571405 | orchestrator | 2026-01-10 14:27:04.571415 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:27:04.571426 | orchestrator | Saturday 10 January 2026 14:27:02 +0000 (0:00:00.200) 0:00:20.856 ****** 2026-01-10 14:27:04.571437 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:27:04.571448 | orchestrator | 2026-01-10 14:27:04.571458 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:27:04.571469 | orchestrator | Saturday 10 January 2026 14:27:02 +0000 (0:00:00.190) 0:00:21.047 ****** 2026-01-10 14:27:04.571480 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:27:04.571490 | orchestrator | 2026-01-10 14:27:04.571501 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:27:04.571512 | orchestrator | Saturday 10 January 2026 14:27:03 +0000 (0:00:00.217) 0:00:21.264 ****** 2026-01-10 14:27:04.571523 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:27:04.571533 | orchestrator | 2026-01-10 14:27:04.571544 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:27:04.571555 | orchestrator | Saturday 10 January 2026 14:27:03 +0000 (0:00:00.219) 0:00:21.484 ****** 2026-01-10 14:27:04.571566 | orchestrator | skipping: [testbed-node-4] 
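The repeated "Add known links" tasks above include `_add-device-links.yml` once per device and attach the matching `/dev/disk/by-id` aliases (e.g. the paired `scsi-0QEMU_...`/`scsi-SQEMU_...` names per disk). A rough sketch of that matching, written as a pure function over an already-resolved link-to-target map; the helper name and map shape are illustrative, not the playbook's actual variables:

```python
def links_for_device(device: str, link_targets: dict) -> list:
    """Return the by-id aliases whose symlink resolves to /dev/<device>.

    link_targets maps a /dev/disk/by-id name to the device node it
    points at (what os.path.realpath would report); in the real play
    this would come from scanning the directory on the target host.
    """
    return sorted(name for name, target in link_targets.items()
                  if target == f"/dev/{device}")

# Example using by-id names seen in the log (device mapping assumed):
by_id = {
    "scsi-0QEMU_QEMU_HARDDISK_44339de3-07bb-4d03-9d3b-2e0777e51af2": "/dev/sdb",
    "scsi-SQEMU_QEMU_HARDDISK_44339de3-07bb-4d03-9d3b-2e0777e51af2": "/dev/sdb",
    "ata-QEMU_DVD-ROM_QM00001": "/dev/sr0",
}
```

This explains the log pattern of exactly two `scsi-*` items per data disk and a single `ata-*` item for the DVD device.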
2026-01-10 14:27:04.571584 | orchestrator | 2026-01-10 14:27:04.571595 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:27:04.571606 | orchestrator | Saturday 10 January 2026 14:27:03 +0000 (0:00:00.209) 0:00:21.694 ****** 2026-01-10 14:27:04.571617 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-01-10 14:27:04.571628 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-01-10 14:27:04.571639 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-01-10 14:27:04.571650 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-01-10 14:27:04.571661 | orchestrator | 2026-01-10 14:27:04.571672 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:27:04.571683 | orchestrator | Saturday 10 January 2026 14:27:04 +0000 (0:00:00.778) 0:00:22.472 ****** 2026-01-10 14:27:04.571694 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:27:09.876111 | orchestrator | 2026-01-10 14:27:09.876250 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:27:09.876271 | orchestrator | Saturday 10 January 2026 14:27:04 +0000 (0:00:00.147) 0:00:22.620 ****** 2026-01-10 14:27:09.876284 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:27:09.876296 | orchestrator | 2026-01-10 14:27:09.876308 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:27:09.876340 | orchestrator | Saturday 10 January 2026 14:27:04 +0000 (0:00:00.147) 0:00:22.768 ****** 2026-01-10 14:27:09.876352 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:27:09.876363 | orchestrator | 2026-01-10 14:27:09.876375 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:27:09.876386 | orchestrator | Saturday 10 January 2026 14:27:04 +0000 (0:00:00.145) 0:00:22.914 ****** 2026-01-10 14:27:09.876397 | 
orchestrator | skipping: [testbed-node-4] 2026-01-10 14:27:09.876408 | orchestrator | 2026-01-10 14:27:09.876419 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-01-10 14:27:09.876430 | orchestrator | Saturday 10 January 2026 14:27:05 +0000 (0:00:00.472) 0:00:23.386 ****** 2026-01-10 14:27:09.876441 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2026-01-10 14:27:09.876452 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2026-01-10 14:27:09.876463 | orchestrator | 2026-01-10 14:27:09.876474 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-01-10 14:27:09.876498 | orchestrator | Saturday 10 January 2026 14:27:05 +0000 (0:00:00.124) 0:00:23.511 ****** 2026-01-10 14:27:09.876510 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:27:09.876521 | orchestrator | 2026-01-10 14:27:09.876532 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-01-10 14:27:09.876544 | orchestrator | Saturday 10 January 2026 14:27:05 +0000 (0:00:00.110) 0:00:23.621 ****** 2026-01-10 14:27:09.876556 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:27:09.876569 | orchestrator | 2026-01-10 14:27:09.876582 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-01-10 14:27:09.876594 | orchestrator | Saturday 10 January 2026 14:27:05 +0000 (0:00:00.117) 0:00:23.739 ****** 2026-01-10 14:27:09.876607 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:27:09.876619 | orchestrator | 2026-01-10 14:27:09.876631 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-01-10 14:27:09.876643 | orchestrator | Saturday 10 January 2026 14:27:05 +0000 (0:00:00.153) 0:00:23.893 ****** 2026-01-10 14:27:09.876656 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:27:09.876670 | 
orchestrator | 2026-01-10 14:27:09.876682 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-01-10 14:27:09.876694 | orchestrator | Saturday 10 January 2026 14:27:05 +0000 (0:00:00.124) 0:00:24.018 ****** 2026-01-10 14:27:09.876707 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8bd1ebb6-f1fa-58d8-b8a2-53a51729cfca'}}) 2026-01-10 14:27:09.876721 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd6926eeb-1396-512c-9972-e44f7d919ea4'}}) 2026-01-10 14:27:09.876759 | orchestrator | 2026-01-10 14:27:09.876772 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-01-10 14:27:09.876784 | orchestrator | Saturday 10 January 2026 14:27:06 +0000 (0:00:00.132) 0:00:24.150 ****** 2026-01-10 14:27:09.876796 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8bd1ebb6-f1fa-58d8-b8a2-53a51729cfca'}})  2026-01-10 14:27:09.876811 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd6926eeb-1396-512c-9972-e44f7d919ea4'}})  2026-01-10 14:27:09.876823 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:27:09.876835 | orchestrator | 2026-01-10 14:27:09.876848 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-01-10 14:27:09.876860 | orchestrator | Saturday 10 January 2026 14:27:06 +0000 (0:00:00.131) 0:00:24.282 ****** 2026-01-10 14:27:09.876872 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8bd1ebb6-f1fa-58d8-b8a2-53a51729cfca'}})  2026-01-10 14:27:09.876885 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd6926eeb-1396-512c-9972-e44f7d919ea4'}})  2026-01-10 14:27:09.876897 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:27:09.876909 | orchestrator | 2026-01-10 
14:27:09.876919 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-01-10 14:27:09.876930 | orchestrator | Saturday 10 January 2026 14:27:06 +0000 (0:00:00.115) 0:00:24.397 ****** 2026-01-10 14:27:09.876941 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8bd1ebb6-f1fa-58d8-b8a2-53a51729cfca'}})  2026-01-10 14:27:09.876954 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd6926eeb-1396-512c-9972-e44f7d919ea4'}})  2026-01-10 14:27:09.876965 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:27:09.876975 | orchestrator | 2026-01-10 14:27:09.876986 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-01-10 14:27:09.876997 | orchestrator | Saturday 10 January 2026 14:27:06 +0000 (0:00:00.141) 0:00:24.539 ****** 2026-01-10 14:27:09.877008 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:27:09.877019 | orchestrator | 2026-01-10 14:27:09.877030 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-01-10 14:27:09.877068 | orchestrator | Saturday 10 January 2026 14:27:06 +0000 (0:00:00.110) 0:00:24.649 ****** 2026-01-10 14:27:09.877082 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:27:09.877093 | orchestrator | 2026-01-10 14:27:09.877104 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-01-10 14:27:09.877115 | orchestrator | Saturday 10 January 2026 14:27:06 +0000 (0:00:00.096) 0:00:24.746 ****** 2026-01-10 14:27:09.877145 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:27:09.877157 | orchestrator | 2026-01-10 14:27:09.877168 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-01-10 14:27:09.877179 | orchestrator | Saturday 10 January 2026 14:27:06 +0000 (0:00:00.242) 0:00:24.988 ****** 2026-01-10 
14:27:09.877190 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:27:09.877201 | orchestrator | 2026-01-10 14:27:09.877212 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-01-10 14:27:09.877222 | orchestrator | Saturday 10 January 2026 14:27:07 +0000 (0:00:00.109) 0:00:25.098 ****** 2026-01-10 14:27:09.877233 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:27:09.877244 | orchestrator | 2026-01-10 14:27:09.877255 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-01-10 14:27:09.877266 | orchestrator | Saturday 10 January 2026 14:27:07 +0000 (0:00:00.115) 0:00:25.213 ****** 2026-01-10 14:27:09.877277 | orchestrator | ok: [testbed-node-4] => { 2026-01-10 14:27:09.877288 | orchestrator |  "ceph_osd_devices": { 2026-01-10 14:27:09.877299 | orchestrator |  "sdb": { 2026-01-10 14:27:09.877311 | orchestrator |  "osd_lvm_uuid": "8bd1ebb6-f1fa-58d8-b8a2-53a51729cfca" 2026-01-10 14:27:09.877322 | orchestrator |  }, 2026-01-10 14:27:09.877342 | orchestrator |  "sdc": { 2026-01-10 14:27:09.877360 | orchestrator |  "osd_lvm_uuid": "d6926eeb-1396-512c-9972-e44f7d919ea4" 2026-01-10 14:27:09.877372 | orchestrator |  } 2026-01-10 14:27:09.877383 | orchestrator |  } 2026-01-10 14:27:09.877394 | orchestrator | } 2026-01-10 14:27:09.877405 | orchestrator | 2026-01-10 14:27:09.877416 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-01-10 14:27:09.877427 | orchestrator | Saturday 10 January 2026 14:27:07 +0000 (0:00:00.126) 0:00:25.339 ****** 2026-01-10 14:27:09.877438 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:27:09.877450 | orchestrator | 2026-01-10 14:27:09.877460 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-01-10 14:27:09.877471 | orchestrator | Saturday 10 January 2026 14:27:07 +0000 (0:00:00.119) 0:00:25.459 ****** 2026-01-10 
14:27:09.877482 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:27:09.877493 | orchestrator |
2026-01-10 14:27:09.877504 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-01-10 14:27:09.877515 | orchestrator | Saturday 10 January 2026 14:27:07 +0000 (0:00:00.111) 0:00:25.571 ******
2026-01-10 14:27:09.877526 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:27:09.877537 | orchestrator |
2026-01-10 14:27:09.877547 | orchestrator | TASK [Print configuration data] ************************************************
2026-01-10 14:27:09.877558 | orchestrator | Saturday 10 January 2026 14:27:07 +0000 (0:00:00.128) 0:00:25.699 ******
2026-01-10 14:27:09.877569 | orchestrator | changed: [testbed-node-4] => {
2026-01-10 14:27:09.877581 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-01-10 14:27:09.877592 | orchestrator |         "ceph_osd_devices": {
2026-01-10 14:27:09.877604 | orchestrator |             "sdb": {
2026-01-10 14:27:09.877620 | orchestrator |                 "osd_lvm_uuid": "8bd1ebb6-f1fa-58d8-b8a2-53a51729cfca"
2026-01-10 14:27:09.877631 | orchestrator |             },
2026-01-10 14:27:09.877642 | orchestrator |             "sdc": {
2026-01-10 14:27:09.877654 | orchestrator |                 "osd_lvm_uuid": "d6926eeb-1396-512c-9972-e44f7d919ea4"
2026-01-10 14:27:09.877665 | orchestrator |             }
2026-01-10 14:27:09.877676 | orchestrator |         },
2026-01-10 14:27:09.877687 | orchestrator |         "lvm_volumes": [
2026-01-10 14:27:09.877698 | orchestrator |             {
2026-01-10 14:27:09.877709 | orchestrator |                 "data": "osd-block-8bd1ebb6-f1fa-58d8-b8a2-53a51729cfca",
2026-01-10 14:27:09.877720 | orchestrator |                 "data_vg": "ceph-8bd1ebb6-f1fa-58d8-b8a2-53a51729cfca"
2026-01-10 14:27:09.877731 | orchestrator |             },
2026-01-10 14:27:09.877742 | orchestrator |             {
2026-01-10 14:27:09.877753 | orchestrator |                 "data": "osd-block-d6926eeb-1396-512c-9972-e44f7d919ea4",
2026-01-10 14:27:09.877764 | orchestrator |                 "data_vg": "ceph-d6926eeb-1396-512c-9972-e44f7d919ea4"
2026-01-10 14:27:09.877774 | orchestrator |             }
2026-01-10 14:27:09.877785 | orchestrator |         ]
2026-01-10 14:27:09.877796 | orchestrator |     }
2026-01-10 14:27:09.877808 | orchestrator | }
2026-01-10 14:27:09.877819 | orchestrator |
2026-01-10 14:27:09.877830 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-01-10 14:27:09.877840 | orchestrator | Saturday 10 January 2026 14:27:07 +0000 (0:00:00.177) 0:00:25.877 ******
2026-01-10 14:27:09.877852 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2026-01-10 14:27:09.877862 | orchestrator |
2026-01-10 14:27:09.877873 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-01-10 14:27:09.877884 | orchestrator |
2026-01-10 14:27:09.877895 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-01-10 14:27:09.877906 | orchestrator | Saturday 10 January 2026 14:27:08 +0000 (0:00:00.979) 0:00:26.856 ******
2026-01-10 14:27:09.877917 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-01-10 14:27:09.877928 | orchestrator |
2026-01-10 14:27:09.877939 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-01-10 14:27:09.877957 | orchestrator | Saturday 10 January 2026 14:27:09 +0000 (0:00:00.513) 0:00:27.370 ******
2026-01-10 14:27:09.877968 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:27:09.877979 | orchestrator |
2026-01-10 14:27:09.877990 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-10 14:27:09.878001 | orchestrator | Saturday 10 January 2026 14:27:09 +0000 (0:00:00.220) 0:00:27.590 ******
2026-01-10 14:27:09.878012 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2026-01-10 14:27:09.878114 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 =>
(item=loop1) 2026-01-10 14:27:09.878126 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2026-01-10 14:27:09.878137 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2026-01-10 14:27:09.878147 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2026-01-10 14:27:09.878166 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2026-01-10 14:27:18.443826 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2026-01-10 14:27:18.443976 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2026-01-10 14:27:18.444004 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2026-01-10 14:27:18.444025 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2026-01-10 14:27:18.444044 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2026-01-10 14:27:18.444061 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2026-01-10 14:27:18.444113 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2026-01-10 14:27:18.444133 | orchestrator | 2026-01-10 14:27:18.444154 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:27:18.444175 | orchestrator | Saturday 10 January 2026 14:27:09 +0000 (0:00:00.330) 0:00:27.920 ****** 2026-01-10 14:27:18.444196 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:27:18.444218 | orchestrator | 2026-01-10 14:27:18.444237 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:27:18.444256 | orchestrator | Saturday 10 January 2026 14:27:10 +0000 
(0:00:00.195) 0:00:28.116 ****** 2026-01-10 14:27:18.444274 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:27:18.444285 | orchestrator | 2026-01-10 14:27:18.444296 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:27:18.444307 | orchestrator | Saturday 10 January 2026 14:27:10 +0000 (0:00:00.180) 0:00:28.297 ****** 2026-01-10 14:27:18.444320 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:27:18.444332 | orchestrator | 2026-01-10 14:27:18.444345 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:27:18.444357 | orchestrator | Saturday 10 January 2026 14:27:10 +0000 (0:00:00.191) 0:00:28.488 ****** 2026-01-10 14:27:18.444369 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:27:18.444381 | orchestrator | 2026-01-10 14:27:18.444392 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:27:18.444405 | orchestrator | Saturday 10 January 2026 14:27:10 +0000 (0:00:00.209) 0:00:28.697 ****** 2026-01-10 14:27:18.444416 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:27:18.444430 | orchestrator | 2026-01-10 14:27:18.444449 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:27:18.444466 | orchestrator | Saturday 10 January 2026 14:27:10 +0000 (0:00:00.202) 0:00:28.900 ****** 2026-01-10 14:27:18.444485 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:27:18.444503 | orchestrator | 2026-01-10 14:27:18.444547 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:27:18.444568 | orchestrator | Saturday 10 January 2026 14:27:11 +0000 (0:00:00.213) 0:00:29.114 ****** 2026-01-10 14:27:18.444619 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:27:18.444641 | orchestrator | 2026-01-10 14:27:18.444661 | orchestrator | TASK [Add known links 
to the list of available block devices] ****************** 2026-01-10 14:27:18.444684 | orchestrator | Saturday 10 January 2026 14:27:11 +0000 (0:00:00.200) 0:00:29.314 ****** 2026-01-10 14:27:18.444704 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:27:18.444723 | orchestrator | 2026-01-10 14:27:18.444742 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:27:18.444760 | orchestrator | Saturday 10 January 2026 14:27:11 +0000 (0:00:00.211) 0:00:29.526 ****** 2026-01-10 14:27:18.444779 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_1f1914ea-fd0f-4c7c-b7b9-c351b421a456) 2026-01-10 14:27:18.444800 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_1f1914ea-fd0f-4c7c-b7b9-c351b421a456) 2026-01-10 14:27:18.444818 | orchestrator | 2026-01-10 14:27:18.444837 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:27:18.444854 | orchestrator | Saturday 10 January 2026 14:27:12 +0000 (0:00:00.934) 0:00:30.461 ****** 2026-01-10 14:27:18.444873 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_f0ceade3-8439-47c2-ab29-dba6a2d0af37) 2026-01-10 14:27:18.444893 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_f0ceade3-8439-47c2-ab29-dba6a2d0af37) 2026-01-10 14:27:18.444912 | orchestrator | 2026-01-10 14:27:18.444930 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:27:18.444948 | orchestrator | Saturday 10 January 2026 14:27:12 +0000 (0:00:00.493) 0:00:30.954 ****** 2026-01-10 14:27:18.444962 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_7172e707-12eb-4bf8-889d-ca95993faa89) 2026-01-10 14:27:18.444973 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_7172e707-12eb-4bf8-889d-ca95993faa89) 2026-01-10 14:27:18.444984 | orchestrator | 2026-01-10 14:27:18.444994 | 
orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:27:18.445005 | orchestrator | Saturday 10 January 2026 14:27:13 +0000 (0:00:00.481) 0:00:31.436 ****** 2026-01-10 14:27:18.445016 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_573345f2-5167-4bb0-bd40-d392a39279fc) 2026-01-10 14:27:18.445026 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_573345f2-5167-4bb0-bd40-d392a39279fc) 2026-01-10 14:27:18.445037 | orchestrator | 2026-01-10 14:27:18.445047 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:27:18.445058 | orchestrator | Saturday 10 January 2026 14:27:13 +0000 (0:00:00.431) 0:00:31.868 ****** 2026-01-10 14:27:18.445094 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-01-10 14:27:18.445113 | orchestrator | 2026-01-10 14:27:18.445131 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:27:18.445178 | orchestrator | Saturday 10 January 2026 14:27:14 +0000 (0:00:00.332) 0:00:32.200 ****** 2026-01-10 14:27:18.445198 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-01-10 14:27:18.445217 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-01-10 14:27:18.445236 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-01-10 14:27:18.445255 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-01-10 14:27:18.445266 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-01-10 14:27:18.445276 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-01-10 14:27:18.445287 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-01-10 14:27:18.445299 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-01-10 14:27:18.445325 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-01-10 14:27:18.445343 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-01-10 14:27:18.445362 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2026-01-10 14:27:18.445380 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-01-10 14:27:18.445399 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-01-10 14:27:18.445417 | orchestrator | 2026-01-10 14:27:18.445436 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:27:18.445448 | orchestrator | Saturday 10 January 2026 14:27:14 +0000 (0:00:00.514) 0:00:32.715 ****** 2026-01-10 14:27:18.445459 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:27:18.445469 | orchestrator | 2026-01-10 14:27:18.445480 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:27:18.445491 | orchestrator | Saturday 10 January 2026 14:27:14 +0000 (0:00:00.253) 0:00:32.968 ****** 2026-01-10 14:27:18.445501 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:27:18.445512 | orchestrator | 2026-01-10 14:27:18.445522 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:27:18.445533 | orchestrator | Saturday 10 January 2026 14:27:15 +0000 (0:00:00.219) 0:00:33.188 ****** 2026-01-10 14:27:18.445543 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:27:18.445554 | orchestrator | 2026-01-10 14:27:18.445565 | 
orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:27:18.445575 | orchestrator | Saturday 10 January 2026 14:27:15 +0000 (0:00:00.227) 0:00:33.416 ****** 2026-01-10 14:27:18.445585 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:27:18.445596 | orchestrator | 2026-01-10 14:27:18.445607 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:27:18.445617 | orchestrator | Saturday 10 January 2026 14:27:15 +0000 (0:00:00.207) 0:00:33.624 ****** 2026-01-10 14:27:18.445628 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:27:18.445638 | orchestrator | 2026-01-10 14:27:18.445649 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:27:18.445659 | orchestrator | Saturday 10 January 2026 14:27:15 +0000 (0:00:00.215) 0:00:33.840 ****** 2026-01-10 14:27:18.445670 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:27:18.445680 | orchestrator | 2026-01-10 14:27:18.445691 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:27:18.445701 | orchestrator | Saturday 10 January 2026 14:27:16 +0000 (0:00:00.719) 0:00:34.559 ****** 2026-01-10 14:27:18.445712 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:27:18.445722 | orchestrator | 2026-01-10 14:27:18.445733 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:27:18.445743 | orchestrator | Saturday 10 January 2026 14:27:16 +0000 (0:00:00.236) 0:00:34.795 ****** 2026-01-10 14:27:18.445754 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:27:18.445764 | orchestrator | 2026-01-10 14:27:18.445775 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:27:18.445785 | orchestrator | Saturday 10 January 2026 14:27:16 +0000 (0:00:00.193) 0:00:34.989 ****** 
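The discovery tasks above iterate over every block device reported by the node (loop0..loop7, sda..sdd, sr0), and the log shows loop devices and the DVD drive producing only "skipping" results, while the QEMU SCSI disks contribute by-id links and partitions. A minimal sketch of that filtering pattern; the helper name and the `reserved` parameter are ours, not part of the OSISM playbooks:

```python
# Hypothetical helper mirroring the pattern visible in the log: loopbacks and
# optical media are skipped, and the OS disk (sda, which carries the sda1/14/15/16
# partitions) is not an OSD candidate.
def candidate_osd_devices(block_devices, reserved=("sda",)):
    """Return device names that could host OSDs: real disks, minus reserved ones."""
    disks = [
        dev for dev in block_devices
        if not dev.startswith(("loop", "sr"))  # drop loop devices and sr0
    ]
    return [dev for dev in disks if dev not in reserved]

devices = ["loop0", "loop1", "sda", "sdb", "sdc", "sdd", "sr0"]
print(candidate_osd_devices(devices))  # ['sdb', 'sdc', 'sdd']
```

In this run, sdb and sdc are then assigned OSD UUIDs while sdd stays unused.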
2026-01-10 14:27:18.445796 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-01-10 14:27:18.445806 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2026-01-10 14:27:18.445817 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-01-10 14:27:18.445833 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-01-10 14:27:18.445851 | orchestrator | 2026-01-10 14:27:18.445870 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:27:18.445888 | orchestrator | Saturday 10 January 2026 14:27:17 +0000 (0:00:00.665) 0:00:35.654 ****** 2026-01-10 14:27:18.445906 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:27:18.445917 | orchestrator | 2026-01-10 14:27:18.445937 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:27:18.445966 | orchestrator | Saturday 10 January 2026 14:27:17 +0000 (0:00:00.196) 0:00:35.850 ****** 2026-01-10 14:27:18.445986 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:27:18.446005 | orchestrator | 2026-01-10 14:27:18.446124 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:27:18.446149 | orchestrator | Saturday 10 January 2026 14:27:17 +0000 (0:00:00.201) 0:00:36.052 ****** 2026-01-10 14:27:18.446169 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:27:18.446188 | orchestrator | 2026-01-10 14:27:18.446206 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:27:18.446226 | orchestrator | Saturday 10 January 2026 14:27:18 +0000 (0:00:00.216) 0:00:36.268 ****** 2026-01-10 14:27:18.446246 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:27:18.446267 | orchestrator | 2026-01-10 14:27:18.446304 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-01-10 14:27:22.484890 | orchestrator | Saturday 10 January 2026 14:27:18 
+0000 (0:00:00.220) 0:00:36.489 ****** 2026-01-10 14:27:22.484981 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2026-01-10 14:27:22.484993 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2026-01-10 14:27:22.485002 | orchestrator | 2026-01-10 14:27:22.485011 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-01-10 14:27:22.485020 | orchestrator | Saturday 10 January 2026 14:27:18 +0000 (0:00:00.172) 0:00:36.662 ****** 2026-01-10 14:27:22.485028 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:27:22.485037 | orchestrator | 2026-01-10 14:27:22.485045 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-01-10 14:27:22.485053 | orchestrator | Saturday 10 January 2026 14:27:18 +0000 (0:00:00.129) 0:00:36.791 ****** 2026-01-10 14:27:22.485062 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:27:22.485070 | orchestrator | 2026-01-10 14:27:22.485095 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-01-10 14:27:22.485105 | orchestrator | Saturday 10 January 2026 14:27:18 +0000 (0:00:00.120) 0:00:36.911 ****** 2026-01-10 14:27:22.485114 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:27:22.485123 | orchestrator | 2026-01-10 14:27:22.485132 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-01-10 14:27:22.485139 | orchestrator | Saturday 10 January 2026 14:27:19 +0000 (0:00:00.373) 0:00:37.284 ****** 2026-01-10 14:27:22.485147 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:27:22.485156 | orchestrator | 2026-01-10 14:27:22.485163 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-01-10 14:27:22.485171 | orchestrator | Saturday 10 January 2026 14:27:19 +0000 (0:00:00.166) 0:00:37.451 ****** 2026-01-10 14:27:22.485180 | orchestrator 
| ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '377cb61f-8fa6-58d2-888b-072b5e96ec0c'}}) 2026-01-10 14:27:22.485188 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '82a5292d-e4f5-5675-b04e-23ddf5e1abb7'}}) 2026-01-10 14:27:22.485196 | orchestrator | 2026-01-10 14:27:22.485203 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-01-10 14:27:22.485211 | orchestrator | Saturday 10 January 2026 14:27:19 +0000 (0:00:00.183) 0:00:37.634 ****** 2026-01-10 14:27:22.485220 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '377cb61f-8fa6-58d2-888b-072b5e96ec0c'}})  2026-01-10 14:27:22.485246 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '82a5292d-e4f5-5675-b04e-23ddf5e1abb7'}})  2026-01-10 14:27:22.485255 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:27:22.485263 | orchestrator | 2026-01-10 14:27:22.485271 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-01-10 14:27:22.485279 | orchestrator | Saturday 10 January 2026 14:27:19 +0000 (0:00:00.138) 0:00:37.773 ****** 2026-01-10 14:27:22.485286 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '377cb61f-8fa6-58d2-888b-072b5e96ec0c'}})  2026-01-10 14:27:22.485316 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '82a5292d-e4f5-5675-b04e-23ddf5e1abb7'}})  2026-01-10 14:27:22.485324 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:27:22.485332 | orchestrator | 2026-01-10 14:27:22.485340 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-01-10 14:27:22.485347 | orchestrator | Saturday 10 January 2026 14:27:19 +0000 (0:00:00.131) 0:00:37.904 ****** 2026-01-10 14:27:22.485355 | orchestrator | skipping: [testbed-node-5] 
=> (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '377cb61f-8fa6-58d2-888b-072b5e96ec0c'}})  2026-01-10 14:27:22.485363 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '82a5292d-e4f5-5675-b04e-23ddf5e1abb7'}})  2026-01-10 14:27:22.485371 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:27:22.485380 | orchestrator | 2026-01-10 14:27:22.485388 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-01-10 14:27:22.485396 | orchestrator | Saturday 10 January 2026 14:27:20 +0000 (0:00:00.157) 0:00:38.062 ****** 2026-01-10 14:27:22.485404 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:27:22.485412 | orchestrator | 2026-01-10 14:27:22.485421 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-01-10 14:27:22.485431 | orchestrator | Saturday 10 January 2026 14:27:20 +0000 (0:00:00.188) 0:00:38.250 ****** 2026-01-10 14:27:22.485445 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:27:22.485460 | orchestrator | 2026-01-10 14:27:22.485471 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-01-10 14:27:22.485480 | orchestrator | Saturday 10 January 2026 14:27:20 +0000 (0:00:00.139) 0:00:38.390 ****** 2026-01-10 14:27:22.485489 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:27:22.485497 | orchestrator | 2026-01-10 14:27:22.485505 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-01-10 14:27:22.485513 | orchestrator | Saturday 10 January 2026 14:27:20 +0000 (0:00:00.122) 0:00:38.512 ****** 2026-01-10 14:27:22.485522 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:27:22.485530 | orchestrator | 2026-01-10 14:27:22.485538 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-01-10 14:27:22.485546 | orchestrator | Saturday 10 January 2026 14:27:20 +0000 
(0:00:00.128) 0:00:38.641 ******
2026-01-10 14:27:22.485554 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:27:22.485562 | orchestrator |
2026-01-10 14:27:22.485570 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-01-10 14:27:22.485578 | orchestrator | Saturday 10 January 2026 14:27:20 +0000 (0:00:00.215) 0:00:38.856 ******
2026-01-10 14:27:22.485587 | orchestrator | ok: [testbed-node-5] => {
2026-01-10 14:27:22.485595 | orchestrator |     "ceph_osd_devices": {
2026-01-10 14:27:22.485604 | orchestrator |         "sdb": {
2026-01-10 14:27:22.485630 | orchestrator |             "osd_lvm_uuid": "377cb61f-8fa6-58d2-888b-072b5e96ec0c"
2026-01-10 14:27:22.485640 | orchestrator |         },
2026-01-10 14:27:22.485649 | orchestrator |         "sdc": {
2026-01-10 14:27:22.485658 | orchestrator |             "osd_lvm_uuid": "82a5292d-e4f5-5675-b04e-23ddf5e1abb7"
2026-01-10 14:27:22.485667 | orchestrator |         }
2026-01-10 14:27:22.485676 | orchestrator |     }
2026-01-10 14:27:22.485685 | orchestrator | }
2026-01-10 14:27:22.485694 | orchestrator |
2026-01-10 14:27:22.485704 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-01-10 14:27:22.485712 | orchestrator | Saturday 10 January 2026 14:27:20 +0000 (0:00:00.110) 0:00:38.967 ******
2026-01-10 14:27:22.485720 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:27:22.485728 | orchestrator |
2026-01-10 14:27:22.485737 | orchestrator | TASK [Print DB devices] ********************************************************
2026-01-10 14:27:22.485745 | orchestrator | Saturday 10 January 2026 14:27:21 +0000 (0:00:00.289) 0:00:39.257 ******
2026-01-10 14:27:22.485753 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:27:22.485837 | orchestrator |
2026-01-10 14:27:22.485845 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-01-10 14:27:22.485853 | orchestrator | Saturday 10 January 2026 14:27:21 +0000 (0:00:00.112) 0:00:39.370 ******
2026-01-10 14:27:22.485861 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:27:22.485868 | orchestrator |
2026-01-10 14:27:22.485876 | orchestrator | TASK [Print configuration data] ************************************************
2026-01-10 14:27:22.485884 | orchestrator | Saturday 10 January 2026 14:27:21 +0000 (0:00:00.112) 0:00:39.482 ******
2026-01-10 14:27:22.485892 | orchestrator | changed: [testbed-node-5] => {
2026-01-10 14:27:22.485900 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-01-10 14:27:22.485908 | orchestrator |         "ceph_osd_devices": {
2026-01-10 14:27:22.485916 | orchestrator |             "sdb": {
2026-01-10 14:27:22.485924 | orchestrator |                 "osd_lvm_uuid": "377cb61f-8fa6-58d2-888b-072b5e96ec0c"
2026-01-10 14:27:22.485932 | orchestrator |             },
2026-01-10 14:27:22.485940 | orchestrator |             "sdc": {
2026-01-10 14:27:22.485948 | orchestrator |                 "osd_lvm_uuid": "82a5292d-e4f5-5675-b04e-23ddf5e1abb7"
2026-01-10 14:27:22.485956 | orchestrator |             }
2026-01-10 14:27:22.485963 | orchestrator |         },
2026-01-10 14:27:22.485971 | orchestrator |         "lvm_volumes": [
2026-01-10 14:27:22.485979 | orchestrator |             {
2026-01-10 14:27:22.485987 | orchestrator |                 "data": "osd-block-377cb61f-8fa6-58d2-888b-072b5e96ec0c",
2026-01-10 14:27:22.485996 | orchestrator |                 "data_vg": "ceph-377cb61f-8fa6-58d2-888b-072b5e96ec0c"
2026-01-10 14:27:22.486004 | orchestrator |             },
2026-01-10 14:27:22.486057 | orchestrator |             {
2026-01-10 14:27:22.486071 | orchestrator |                 "data": "osd-block-82a5292d-e4f5-5675-b04e-23ddf5e1abb7",
2026-01-10 14:27:22.486112 | orchestrator |                 "data_vg": "ceph-82a5292d-e4f5-5675-b04e-23ddf5e1abb7"
2026-01-10 14:27:22.486122 | orchestrator |             }
2026-01-10 14:27:22.486130 | orchestrator |         ]
2026-01-10 14:27:22.486143 | orchestrator |     }
2026-01-10 14:27:22.486152 | orchestrator | }
2026-01-10 14:27:22.486161 | orchestrator |
2026-01-10 14:27:22.486170 | orchestrator | RUNNING HANDLER [Write configuration file]
*************************************
2026-01-10 14:27:22.486179 | orchestrator | Saturday 10 January 2026 14:27:21 +0000 (0:00:00.187) 0:00:39.669 ******
2026-01-10 14:27:22.486187 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-01-10 14:27:22.486195 | orchestrator |
2026-01-10 14:27:22.486203 | orchestrator | PLAY RECAP *********************************************************************
2026-01-10 14:27:22.486212 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-01-10 14:27:22.486222 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-01-10 14:27:22.486231 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-01-10 14:27:22.486239 | orchestrator |
2026-01-10 14:27:22.486247 | orchestrator |
2026-01-10 14:27:22.486255 | orchestrator |
2026-01-10 14:27:22.486263 | orchestrator | TASKS RECAP ********************************************************************
2026-01-10 14:27:22.486271 | orchestrator | Saturday 10 January 2026 14:27:22 +0000 (0:00:00.835) 0:00:40.505 ******
2026-01-10 14:27:22.486280 | orchestrator | ===============================================================================
2026-01-10 14:27:22.486288 | orchestrator | Write configuration file ------------------------------------------------ 3.65s
2026-01-10 14:27:22.486295 | orchestrator | Add known partitions to the list of available block devices ------------- 1.24s
2026-01-10 14:27:22.486304 | orchestrator | Add known links to the list of available block devices ------------------ 1.21s
2026-01-10 14:27:22.486311 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 1.09s
2026-01-10 14:27:22.486328 | orchestrator | Add known links to the list of available block devices ------------------ 0.93s
2026-01-10 14:27:22.486336 | orchestrator | Add known partitions to the list of available block devices ------------- 0.85s
2026-01-10 14:27:22.486344 | orchestrator | Add known partitions to the list of available block devices ------------- 0.78s
2026-01-10 14:27:22.486352 | orchestrator | Add known partitions to the list of available block devices ------------- 0.72s
2026-01-10 14:27:22.486360 | orchestrator | Print configuration data ------------------------------------------------ 0.72s
2026-01-10 14:27:22.486368 | orchestrator | Get initial list of available block devices ----------------------------- 0.70s
2026-01-10 14:27:22.486375 | orchestrator | Add known partitions to the list of available block devices ------------- 0.70s
2026-01-10 14:27:22.486383 | orchestrator | Generate shared DB/WAL VG names ----------------------------------------- 0.68s
2026-01-10 14:27:22.486391 | orchestrator | Add known links to the list of available block devices ------------------ 0.67s
2026-01-10 14:27:22.486411 | orchestrator | Add known partitions to the list of available block devices ------------- 0.67s
2026-01-10 14:27:22.736385 | orchestrator | Add known links to the list of available block devices ------------------ 0.66s
2026-01-10 14:27:22.736463 | orchestrator | Add known links to the list of available block devices ------------------ 0.64s
2026-01-10 14:27:22.736473 | orchestrator | Add known links to the list of available block devices ------------------ 0.62s
2026-01-10 14:27:22.736481 | orchestrator | Print WAL devices ------------------------------------------------------- 0.56s
2026-01-10 14:27:22.736488 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.55s
2026-01-10 14:27:22.736495 | orchestrator | Set DB devices config data ---------------------------------------------- 0.53s
2026-01-10 14:27:45.311544 | orchestrator | 2026-01-10 14:27:45 | INFO  | Task e6c7efa8-3433-49ff-84bc-a2528dd97efa (sync inventory) is running in background. Output coming soon.
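The PLAY RECAP above reports per-host counters as `key=value` pairs; in CI it is common to gate on `failed` and `unreachable` being zero. A small illustrative parser for lines of that shape (the line format is standard Ansible output, the helper itself is ours):

```python
import re

# Matches Ansible recap lines such as:
#   testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
RECAP_RE = re.compile(r"^(?P<host>\S+)\s*:\s*(?P<stats>(?:\w+=\d+\s*)+)$")

def parse_recap(line):
    """Return (host, stats dict) for a PLAY RECAP line, or None if it doesn't match."""
    m = RECAP_RE.match(line.strip())
    if not m:
        return None
    stats = {
        key: int(value)
        for key, value in (pair.split("=") for pair in m.group("stats").split())
    }
    return m.group("host"), stats

host, stats = parse_recap(
    "testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0"
)
print(host, stats["failed"])  # testbed-node-3 0
```

For the run above, all three nodes parse to `failed=0` and `unreachable=0`, which is why the job proceeds to the next task.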
2026-01-10 14:28:13.466322 | orchestrator | 2026-01-10 14:27:46 | INFO  | Starting group_vars file reorganization
2026-01-10 14:28:13.466425 | orchestrator | 2026-01-10 14:27:46 | INFO  | Moved 0 file(s) to their respective directories
2026-01-10 14:28:13.466440 | orchestrator | 2026-01-10 14:27:46 | INFO  | Group_vars file reorganization completed
2026-01-10 14:28:13.466448 | orchestrator | 2026-01-10 14:27:49 | INFO  | Starting variable preparation from inventory
2026-01-10 14:28:13.466456 | orchestrator | 2026-01-10 14:27:52 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2026-01-10 14:28:13.466463 | orchestrator | 2026-01-10 14:27:52 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2026-01-10 14:28:13.466470 | orchestrator | 2026-01-10 14:27:52 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2026-01-10 14:28:13.466479 | orchestrator | 2026-01-10 14:27:52 | INFO  | 3 file(s) written, 6 host(s) processed
2026-01-10 14:28:13.466487 | orchestrator | 2026-01-10 14:27:52 | INFO  | Variable preparation completed
2026-01-10 14:28:13.466494 | orchestrator | 2026-01-10 14:27:54 | INFO  | Starting inventory overwrite handling
2026-01-10 14:28:13.466501 | orchestrator | 2026-01-10 14:27:54 | INFO  | Handling group overwrites in 99-overwrite
2026-01-10 14:28:13.466507 | orchestrator | 2026-01-10 14:27:54 | INFO  | Removing group frr:children from 60-generic
2026-01-10 14:28:13.466514 | orchestrator | 2026-01-10 14:27:54 | INFO  | Removing group netbird:children from 50-infrastructure
2026-01-10 14:28:13.466520 | orchestrator | 2026-01-10 14:27:54 | INFO  | Removing group ceph-mds from 50-ceph
2026-01-10 14:28:13.466526 | orchestrator | 2026-01-10 14:27:54 | INFO  | Removing group ceph-rgw from 50-ceph
2026-01-10 14:28:13.466533 | orchestrator | 2026-01-10 14:27:54 | INFO  | Handling group overwrites in 20-roles
2026-01-10 14:28:13.466539 | orchestrator | 2026-01-10 14:27:54 | INFO  | Removing group k3s_node from 50-infrastructure
2026-01-10 14:28:13.466569 | orchestrator | 2026-01-10 14:27:54 | INFO  | Removed 5 group(s) in total
2026-01-10 14:28:13.466573 | orchestrator | 2026-01-10 14:27:54 | INFO  | Inventory overwrite handling completed
2026-01-10 14:28:13.466577 | orchestrator | 2026-01-10 14:27:55 | INFO  | Starting merge of inventory files
2026-01-10 14:28:13.466581 | orchestrator | 2026-01-10 14:27:55 | INFO  | Inventory files merged successfully
2026-01-10 14:28:13.466584 | orchestrator | 2026-01-10 14:28:01 | INFO  | Generating ClusterShell configuration from Ansible inventory
2026-01-10 14:28:13.466588 | orchestrator | 2026-01-10 14:28:12 | INFO  | Successfully wrote ClusterShell configuration
2026-01-10 14:28:13.466592 | orchestrator | [master df881f1] 2026-01-10-14-28
2026-01-10 14:28:13.466597 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-)
2026-01-10 14:28:15.845515 | orchestrator | 2026-01-10 14:28:15 | INFO  | Task fed116fb-1814-4beb-85cc-2954f8ea3d79 (ceph-create-lvm-devices) was prepared for execution.
2026-01-10 14:28:15.845589 | orchestrator | 2026-01-10 14:28:15 | INFO  | It takes a moment until task fed116fb-1814-4beb-85cc-2954f8ea3d79 (ceph-create-lvm-devices) has been started and output is visible here.
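The ceph-create-lvm-devices task launched next consumes the configuration written earlier: the "Print configuration data" dumps show each `lvm_volumes` entry pairing an `osd-block-<uuid>` logical volume with a `ceph-<uuid>` volume group, both derived from the device's `osd_lvm_uuid`. A minimal sketch of that mapping, under the assumption that the names are derived exactly as they appear in the dumps (the helper name is ours; the UUIDs below come from the testbed-node-5 output):

```python
def lvm_volumes_from_osd_devices(ceph_osd_devices):
    """Expand a ceph_osd_devices mapping into the lvm_volumes list seen in the
    log: one osd-block-<uuid> LV inside a ceph-<uuid> VG per device. This is
    the block-only layout (no separate DB/WAL volumes), matching the skipped
    "block + db/wal" tasks in this run."""
    return [
        {
            "data": f"osd-block-{conf['osd_lvm_uuid']}",
            "data_vg": f"ceph-{conf['osd_lvm_uuid']}",
        }
        for _dev, conf in sorted(ceph_osd_devices.items())
    ]

volumes = lvm_volumes_from_osd_devices({
    "sdb": {"osd_lvm_uuid": "377cb61f-8fa6-58d2-888b-072b5e96ec0c"},
    "sdc": {"osd_lvm_uuid": "82a5292d-e4f5-5675-b04e-23ddf5e1abb7"},
})
print(volumes[0]["data_vg"])  # ceph-377cb61f-8fa6-58d2-888b-072b5e96ec0c
```

Keying the VG/LV names on a per-device UUID rather than the device letter means the OSD layout survives /dev/sdX renumbering across reboots.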
2026-01-10 14:28:27.996496 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-01-10 14:28:27.996631 | orchestrator | 2.16.14
2026-01-10 14:28:27.996647 | orchestrator |
2026-01-10 14:28:27.996659 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-01-10 14:28:27.996670 | orchestrator |
2026-01-10 14:28:27.996681 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-01-10 14:28:27.996691 | orchestrator | Saturday 10 January 2026 14:28:20 +0000 (0:00:00.350) 0:00:00.350 ******
2026-01-10 14:28:27.996702 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-01-10 14:28:27.996712 | orchestrator |
2026-01-10 14:28:27.996722 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-01-10 14:28:27.996732 | orchestrator | Saturday 10 January 2026 14:28:20 +0000 (0:00:00.243) 0:00:00.593 ******
2026-01-10 14:28:27.996742 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:28:27.996752 | orchestrator |
2026-01-10 14:28:27.996762 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-10 14:28:27.996772 | orchestrator | Saturday 10 January 2026 14:28:20 +0000 (0:00:00.251) 0:00:00.845 ******
2026-01-10 14:28:27.996782 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-01-10 14:28:27.996792 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-01-10 14:28:27.996801 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-01-10 14:28:27.996811 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-01-10 14:28:27.996821 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-01-10 14:28:27.996830 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-01-10 14:28:27.996840 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-01-10 14:28:27.996850 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-01-10 14:28:27.996860 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-01-10 14:28:27.996895 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-01-10 14:28:27.996905 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-01-10 14:28:27.996915 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-01-10 14:28:27.996925 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-01-10 14:28:27.996960 | orchestrator |
2026-01-10 14:28:27.996971 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-10 14:28:27.996980 | orchestrator | Saturday 10 January 2026 14:28:21 +0000 (0:00:00.516) 0:00:01.362 ******
2026-01-10 14:28:27.996990 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:28:27.997002 | orchestrator |
2026-01-10 14:28:27.997013 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-10 14:28:27.997024 | orchestrator | Saturday 10 January 2026 14:28:21 +0000 (0:00:00.214) 0:00:01.577 ******
2026-01-10 14:28:27.997034 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:28:27.997045 | orchestrator |
2026-01-10 14:28:27.997056 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-10 14:28:27.997072 | orchestrator | Saturday 10 January 2026 14:28:21 +0000 (0:00:00.208) 0:00:01.785 ******
2026-01-10 14:28:27.997083 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:28:27.997094 | orchestrator |
2026-01-10 14:28:27.997105 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-10 14:28:27.997116 | orchestrator | Saturday 10 January 2026 14:28:21 +0000 (0:00:00.229) 0:00:02.015 ******
2026-01-10 14:28:27.997126 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:28:27.997137 | orchestrator |
2026-01-10 14:28:27.997148 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-10 14:28:27.997158 | orchestrator | Saturday 10 January 2026 14:28:22 +0000 (0:00:00.218) 0:00:02.233 ******
2026-01-10 14:28:27.997170 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:28:27.997181 | orchestrator |
2026-01-10 14:28:27.997191 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-10 14:28:27.997202 | orchestrator | Saturday 10 January 2026 14:28:22 +0000 (0:00:00.206) 0:00:02.440 ******
2026-01-10 14:28:27.997213 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:28:27.997224 | orchestrator |
2026-01-10 14:28:27.997235 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-10 14:28:27.997247 | orchestrator | Saturday 10 January 2026 14:28:22 +0000 (0:00:00.218) 0:00:02.659 ******
2026-01-10 14:28:27.997257 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:28:27.997291 | orchestrator |
2026-01-10 14:28:27.997302 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-10 14:28:27.997313 | orchestrator | Saturday 10 January 2026 14:28:22 +0000 (0:00:00.207) 0:00:02.867 ******
2026-01-10 14:28:27.997324 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:28:27.997335 | orchestrator |
2026-01-10 14:28:27.997346 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-10 14:28:27.997356 | orchestrator | Saturday 10 January 2026 14:28:23 +0000 (0:00:00.203) 0:00:03.071 ******
2026-01-10 14:28:27.997367 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_de778ce0-4f6d-44be-b211-86e0cabeb927)
2026-01-10 14:28:27.997378 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_de778ce0-4f6d-44be-b211-86e0cabeb927)
2026-01-10 14:28:27.997387 | orchestrator |
2026-01-10 14:28:27.997397 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-10 14:28:27.997429 | orchestrator | Saturday 10 January 2026 14:28:23 +0000 (0:00:00.415) 0:00:03.486 ******
2026-01-10 14:28:27.997440 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_15b7e91d-d5fd-4068-ac98-0857e3d5fdf2)
2026-01-10 14:28:27.997450 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_15b7e91d-d5fd-4068-ac98-0857e3d5fdf2)
2026-01-10 14:28:27.997460 | orchestrator |
2026-01-10 14:28:27.997469 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-10 14:28:27.997479 | orchestrator | Saturday 10 January 2026 14:28:24 +0000 (0:00:00.666) 0:00:04.153 ******
2026-01-10 14:28:27.997489 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_b1ccfdf8-8ebc-4581-b39b-71c057731eea)
2026-01-10 14:28:27.997499 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_b1ccfdf8-8ebc-4581-b39b-71c057731eea)
2026-01-10 14:28:27.997518 | orchestrator |
2026-01-10 14:28:27.997527 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-10 14:28:27.997537 | orchestrator | Saturday 10 January 2026 14:28:24 +0000 (0:00:00.663) 0:00:04.816 ******
2026-01-10 14:28:27.997547 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_c3c9ac61-c03e-421c-9f43-37b1f8399a84)
2026-01-10 14:28:27.997557 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_c3c9ac61-c03e-421c-9f43-37b1f8399a84)
2026-01-10 14:28:27.997566 | orchestrator |
2026-01-10 14:28:27.997576 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-10 14:28:27.997585 | orchestrator | Saturday 10 January 2026 14:28:25 +0000 (0:00:00.931) 0:00:05.748 ******
2026-01-10 14:28:27.997595 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-01-10 14:28:27.997605 | orchestrator |
2026-01-10 14:28:27.997614 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-10 14:28:27.997624 | orchestrator | Saturday 10 January 2026 14:28:26 +0000 (0:00:00.360) 0:00:06.108 ******
2026-01-10 14:28:27.997634 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-01-10 14:28:27.997643 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2026-01-10 14:28:27.997653 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-01-10 14:28:27.997662 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-01-10 14:28:27.997672 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-01-10 14:28:27.997682 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-01-10 14:28:27.997692 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-01-10 14:28:27.997701 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2026-01-10 14:28:27.997711 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2026-01-10 14:28:27.997720 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2026-01-10 14:28:27.997730 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2026-01-10 14:28:27.997740 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2026-01-10 14:28:27.997749 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2026-01-10 14:28:27.997759 | orchestrator |
2026-01-10 14:28:27.997769 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-10 14:28:27.997779 | orchestrator | Saturday 10 January 2026 14:28:26 +0000 (0:00:00.415) 0:00:06.524 ******
2026-01-10 14:28:27.997788 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:28:27.997798 | orchestrator |
2026-01-10 14:28:27.997808 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-10 14:28:27.997818 | orchestrator | Saturday 10 January 2026 14:28:26 +0000 (0:00:00.222) 0:00:06.746 ******
2026-01-10 14:28:27.997828 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:28:27.997838 | orchestrator |
2026-01-10 14:28:27.997847 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-10 14:28:27.997857 | orchestrator | Saturday 10 January 2026 14:28:26 +0000 (0:00:00.218) 0:00:06.965 ******
2026-01-10 14:28:27.997866 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:28:27.997876 | orchestrator |
2026-01-10 14:28:27.997885 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-10 14:28:27.997895 | orchestrator | Saturday 10 January 2026 14:28:27 +0000 (0:00:00.220) 0:00:07.186 ******
2026-01-10 14:28:27.997905 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:28:27.997923 | orchestrator |
2026-01-10 14:28:27.997932 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-10 14:28:27.997942 | orchestrator | Saturday 10 January 2026 14:28:27 +0000 (0:00:00.201) 0:00:07.388 ******
2026-01-10 14:28:27.997952 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:28:27.997961 | orchestrator |
2026-01-10 14:28:27.997971 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-10 14:28:27.997980 | orchestrator | Saturday 10 January 2026 14:28:27 +0000 (0:00:00.207) 0:00:07.595 ******
2026-01-10 14:28:27.997990 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:28:27.998000 | orchestrator |
2026-01-10 14:28:27.998009 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-10 14:28:27.998093 | orchestrator | Saturday 10 January 2026 14:28:27 +0000 (0:00:00.206) 0:00:07.802 ******
2026-01-10 14:28:27.998103 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:28:27.998113 | orchestrator |
2026-01-10 14:28:27.998128 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-10 14:28:36.981363 | orchestrator | Saturday 10 January 2026 14:28:27 +0000 (0:00:00.246) 0:00:08.048 ******
2026-01-10 14:28:36.981456 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:28:36.981463 | orchestrator |
2026-01-10 14:28:36.981469 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-10 14:28:36.981473 | orchestrator | Saturday 10 January 2026 14:28:28 +0000 (0:00:00.235) 0:00:08.284 ******
2026-01-10 14:28:36.981477 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2026-01-10 14:28:36.981482 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2026-01-10 14:28:36.981487 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2026-01-10 14:28:36.981490 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2026-01-10 14:28:36.981494 | orchestrator |
2026-01-10 14:28:36.981498 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-10 14:28:36.981502 | orchestrator | Saturday 10 January 2026 14:28:29 +0000 (0:00:01.180) 0:00:09.465 ******
2026-01-10 14:28:36.981505 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:28:36.981509 | orchestrator |
2026-01-10 14:28:36.981513 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-10 14:28:36.981517 | orchestrator | Saturday 10 January 2026 14:28:29 +0000 (0:00:00.345) 0:00:09.811 ******
2026-01-10 14:28:36.981520 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:28:36.981524 | orchestrator |
2026-01-10 14:28:36.981528 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-10 14:28:36.981532 | orchestrator | Saturday 10 January 2026 14:28:29 +0000 (0:00:00.236) 0:00:10.047 ******
2026-01-10 14:28:36.981536 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:28:36.981540 | orchestrator |
2026-01-10 14:28:36.981543 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-10 14:28:36.981547 | orchestrator | Saturday 10 January 2026 14:28:30 +0000 (0:00:00.230) 0:00:10.278 ******
2026-01-10 14:28:36.981551 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:28:36.981554 | orchestrator |
2026-01-10 14:28:36.981558 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-01-10 14:28:36.981562 | orchestrator | Saturday 10 January 2026 14:28:30 +0000 (0:00:00.237) 0:00:10.515 ******
2026-01-10 14:28:36.981565 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:28:36.981569 | orchestrator |
2026-01-10 14:28:36.981573 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-01-10 14:28:36.981576 | orchestrator | Saturday 10 January 2026 14:28:30 +0000 (0:00:00.153) 0:00:10.668 ******
2026-01-10 14:28:36.981594 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'afcf3728-3a76-5607-aebb-61451d8643bd'}})
2026-01-10 14:28:36.981598 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '7d69473f-eeb6-5b22-bf27-181ed9eac77f'}})
2026-01-10 14:28:36.981602 | orchestrator |
2026-01-10 14:28:36.981606 | orchestrator | TASK [Create block VGs] ********************************************************
2026-01-10 14:28:36.981628 | orchestrator | Saturday 10 January 2026 14:28:30 +0000 (0:00:00.186) 0:00:10.855 ******
2026-01-10 14:28:36.981634 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-afcf3728-3a76-5607-aebb-61451d8643bd', 'data_vg': 'ceph-afcf3728-3a76-5607-aebb-61451d8643bd'})
2026-01-10 14:28:36.981640 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-7d69473f-eeb6-5b22-bf27-181ed9eac77f', 'data_vg': 'ceph-7d69473f-eeb6-5b22-bf27-181ed9eac77f'})
2026-01-10 14:28:36.981643 | orchestrator |
2026-01-10 14:28:36.981650 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-01-10 14:28:36.981654 | orchestrator | Saturday 10 January 2026 14:28:32 +0000 (0:00:02.172) 0:00:13.027 ******
2026-01-10 14:28:36.981657 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-afcf3728-3a76-5607-aebb-61451d8643bd', 'data_vg': 'ceph-afcf3728-3a76-5607-aebb-61451d8643bd'})
2026-01-10 14:28:36.981663 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7d69473f-eeb6-5b22-bf27-181ed9eac77f', 'data_vg': 'ceph-7d69473f-eeb6-5b22-bf27-181ed9eac77f'})
2026-01-10 14:28:36.981667 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:28:36.981671 | orchestrator |
2026-01-10 14:28:36.981675 | orchestrator | TASK [Create block LVs] ********************************************************
2026-01-10 14:28:36.981678 | orchestrator | Saturday 10 January 2026 14:28:33 +0000 (0:00:00.164) 0:00:13.192 ******
2026-01-10 14:28:36.981682 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-afcf3728-3a76-5607-aebb-61451d8643bd', 'data_vg': 'ceph-afcf3728-3a76-5607-aebb-61451d8643bd'})
2026-01-10 14:28:36.981686 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-7d69473f-eeb6-5b22-bf27-181ed9eac77f', 'data_vg': 'ceph-7d69473f-eeb6-5b22-bf27-181ed9eac77f'})
2026-01-10 14:28:36.981690 | orchestrator |
2026-01-10 14:28:36.981693 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-01-10 14:28:36.981697 | orchestrator | Saturday 10 January 2026 14:28:34 +0000 (0:00:01.582) 0:00:14.775 ******
2026-01-10 14:28:36.981701 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-afcf3728-3a76-5607-aebb-61451d8643bd', 'data_vg': 'ceph-afcf3728-3a76-5607-aebb-61451d8643bd'})
2026-01-10 14:28:36.981705 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7d69473f-eeb6-5b22-bf27-181ed9eac77f', 'data_vg': 'ceph-7d69473f-eeb6-5b22-bf27-181ed9eac77f'})
2026-01-10 14:28:36.981709 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:28:36.981712 | orchestrator |
2026-01-10 14:28:36.981716 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-01-10 14:28:36.981720 | orchestrator | Saturday 10 January 2026 14:28:34 +0000 (0:00:00.176) 0:00:14.946 ******
2026-01-10 14:28:36.981735 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:28:36.981739 | orchestrator |
2026-01-10 14:28:36.981743 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-01-10 14:28:36.981747 | orchestrator | Saturday 10 January 2026 14:28:35 +0000 (0:00:00.176) 0:00:15.122 ******
2026-01-10 14:28:36.981750 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-afcf3728-3a76-5607-aebb-61451d8643bd', 'data_vg': 'ceph-afcf3728-3a76-5607-aebb-61451d8643bd'})
2026-01-10 14:28:36.981754 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7d69473f-eeb6-5b22-bf27-181ed9eac77f', 'data_vg': 'ceph-7d69473f-eeb6-5b22-bf27-181ed9eac77f'})
2026-01-10 14:28:36.981758 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:28:36.981762 | orchestrator |
2026-01-10 14:28:36.981765 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-01-10 14:28:36.981769 | orchestrator | Saturday 10 January 2026 14:28:35 +0000 (0:00:00.391) 0:00:15.514 ******
2026-01-10 14:28:36.981773 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:28:36.981776 | orchestrator |
2026-01-10 14:28:36.981780 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-01-10 14:28:36.981784 | orchestrator | Saturday 10 January 2026 14:28:35 +0000 (0:00:00.134) 0:00:15.648 ******
2026-01-10 14:28:36.981792 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-afcf3728-3a76-5607-aebb-61451d8643bd', 'data_vg': 'ceph-afcf3728-3a76-5607-aebb-61451d8643bd'})
2026-01-10 14:28:36.981796 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7d69473f-eeb6-5b22-bf27-181ed9eac77f', 'data_vg': 'ceph-7d69473f-eeb6-5b22-bf27-181ed9eac77f'})
2026-01-10 14:28:36.981800 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:28:36.981804 | orchestrator |
2026-01-10 14:28:36.981807 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-01-10 14:28:36.981811 | orchestrator | Saturday 10 January 2026 14:28:35 +0000 (0:00:00.181) 0:00:15.830 ******
2026-01-10 14:28:36.981815 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:28:36.981818 | orchestrator |
2026-01-10 14:28:36.981822 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-01-10 14:28:36.981826 | orchestrator | Saturday 10 January 2026 14:28:35 +0000 (0:00:00.157) 0:00:15.987 ******
2026-01-10 14:28:36.981832 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-afcf3728-3a76-5607-aebb-61451d8643bd', 'data_vg': 'ceph-afcf3728-3a76-5607-aebb-61451d8643bd'})
2026-01-10 14:28:36.981838 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7d69473f-eeb6-5b22-bf27-181ed9eac77f', 'data_vg': 'ceph-7d69473f-eeb6-5b22-bf27-181ed9eac77f'})
2026-01-10 14:28:36.981844 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:28:36.981850 | orchestrator |
2026-01-10 14:28:36.981856 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-01-10 14:28:36.981863 | orchestrator | Saturday 10 January 2026 14:28:36 +0000 (0:00:00.165) 0:00:16.153 ******
2026-01-10 14:28:36.981869 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:28:36.981875 | orchestrator |
2026-01-10 14:28:36.981881 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-01-10 14:28:36.981887 | orchestrator | Saturday 10 January 2026 14:28:36 +0000 (0:00:00.142) 0:00:16.295 ******
2026-01-10 14:28:36.981896 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-afcf3728-3a76-5607-aebb-61451d8643bd', 'data_vg': 'ceph-afcf3728-3a76-5607-aebb-61451d8643bd'})
2026-01-10 14:28:36.981902 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7d69473f-eeb6-5b22-bf27-181ed9eac77f', 'data_vg': 'ceph-7d69473f-eeb6-5b22-bf27-181ed9eac77f'})
2026-01-10 14:28:36.981908 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:28:36.981913 | orchestrator |
2026-01-10 14:28:36.981918 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-01-10 14:28:36.981924 | orchestrator | Saturday 10 January 2026 14:28:36 +0000 (0:00:00.186) 0:00:16.482 ******
2026-01-10 14:28:36.981930 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-afcf3728-3a76-5607-aebb-61451d8643bd', 'data_vg': 'ceph-afcf3728-3a76-5607-aebb-61451d8643bd'})
2026-01-10 14:28:36.981936 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7d69473f-eeb6-5b22-bf27-181ed9eac77f', 'data_vg': 'ceph-7d69473f-eeb6-5b22-bf27-181ed9eac77f'})
2026-01-10 14:28:36.981941 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:28:36.981948 | orchestrator |
2026-01-10 14:28:36.981953 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-01-10 14:28:36.981960 | orchestrator | Saturday 10 January 2026 14:28:36 +0000 (0:00:00.215) 0:00:16.697 ******
2026-01-10 14:28:36.981966 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-afcf3728-3a76-5607-aebb-61451d8643bd', 'data_vg': 'ceph-afcf3728-3a76-5607-aebb-61451d8643bd'})
2026-01-10 14:28:36.981972 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7d69473f-eeb6-5b22-bf27-181ed9eac77f', 'data_vg': 'ceph-7d69473f-eeb6-5b22-bf27-181ed9eac77f'})
2026-01-10 14:28:36.981978 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:28:36.981984 | orchestrator |
2026-01-10 14:28:36.981990 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-01-10 14:28:36.981996 | orchestrator | Saturday 10 January 2026 14:28:36 +0000 (0:00:00.173) 0:00:16.871 ******
2026-01-10 14:28:36.982009 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:28:36.982053 | orchestrator |
2026-01-10 14:28:36.982061 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-01-10 14:28:36.982075 | orchestrator | Saturday 10 January 2026 14:28:36 +0000 (0:00:00.162) 0:00:17.034 ******
2026-01-10 14:28:44.158253 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:28:44.158411 | orchestrator |
2026-01-10 14:28:44.158421 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-01-10 14:28:44.158429 | orchestrator | Saturday 10 January 2026 14:28:37 +0000 (0:00:00.160) 0:00:17.194 ******
2026-01-10 14:28:44.158488 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:28:44.158493 | orchestrator |
2026-01-10 14:28:44.158497 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-01-10 14:28:44.158501 | orchestrator | Saturday 10 January 2026 14:28:37 +0000 (0:00:00.141) 0:00:17.336 ******
2026-01-10 14:28:44.158505 | orchestrator | ok: [testbed-node-3] => {
2026-01-10 14:28:44.158509 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2026-01-10 14:28:44.158513 | orchestrator | }
2026-01-10 14:28:44.158518 | orchestrator |
2026-01-10 14:28:44.158522 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-01-10 14:28:44.158556 | orchestrator | Saturday 10 January 2026 14:28:37 +0000 (0:00:00.368) 0:00:17.704 ******
2026-01-10 14:28:44.158560 | orchestrator | ok: [testbed-node-3] => {
2026-01-10 14:28:44.158564 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2026-01-10 14:28:44.158567 | orchestrator | }
2026-01-10 14:28:44.158571 | orchestrator |
2026-01-10 14:28:44.158575 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-01-10 14:28:44.158579 | orchestrator | Saturday 10 January 2026 14:28:37 +0000 (0:00:00.175) 0:00:17.880 ******
2026-01-10 14:28:44.158583 | orchestrator | ok: [testbed-node-3] => {
2026-01-10 14:28:44.158587 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2026-01-10 14:28:44.158591 | orchestrator | }
2026-01-10 14:28:44.158595 | orchestrator |
2026-01-10 14:28:44.158599 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-01-10 14:28:44.158603 | orchestrator | Saturday 10 January 2026 14:28:37 +0000 (0:00:00.174) 0:00:18.054 ******
2026-01-10 14:28:44.158607 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:28:44.158610 | orchestrator |
2026-01-10 14:28:44.158614 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-01-10 14:28:44.158619 | orchestrator | Saturday 10 January 2026 14:28:38 +0000 (0:00:00.747) 0:00:18.802 ******
2026-01-10 14:28:44.158622 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:28:44.158626 | orchestrator |
2026-01-10 14:28:44.158630 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-01-10 14:28:44.158634 | orchestrator | Saturday 10 January 2026 14:28:39 +0000 (0:00:00.571) 0:00:19.374 ******
2026-01-10 14:28:44.158638 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:28:44.158641 | orchestrator |
2026-01-10 14:28:44.158645 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-01-10 14:28:44.158649 | orchestrator | Saturday 10 January 2026 14:28:39 +0000 (0:00:00.589) 0:00:19.964 ******
2026-01-10 14:28:44.158653 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:28:44.158657 | orchestrator |
2026-01-10 14:28:44.158661 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-01-10 14:28:44.158665 | orchestrator | Saturday 10 January 2026 14:28:40 +0000 (0:00:00.144) 0:00:20.108 ******
2026-01-10 14:28:44.158669 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:28:44.158673 | orchestrator |
2026-01-10 14:28:44.158677 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-01-10 14:28:44.158681 | orchestrator | Saturday 10 January 2026 14:28:40 +0000 (0:00:00.141) 0:00:20.250 ******
2026-01-10 14:28:44.158685 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:28:44.158688 | orchestrator |
2026-01-10 14:28:44.158692 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-01-10 14:28:44.158717 | orchestrator | Saturday 10 January 2026 14:28:40 +0000 (0:00:00.139) 0:00:20.389 ******
2026-01-10 14:28:44.158722 | orchestrator | ok: [testbed-node-3] => {
2026-01-10 14:28:44.158726 | orchestrator |     "vgs_report": {
2026-01-10 14:28:44.158730 | orchestrator |         "vg": []
2026-01-10 14:28:44.158734 | orchestrator |     }
2026-01-10 14:28:44.158738 | orchestrator | }
2026-01-10 14:28:44.158742 | orchestrator |
2026-01-10 14:28:44.158746 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-01-10 14:28:44.158750 | orchestrator | Saturday 10 January 2026 14:28:40 +0000 (0:00:00.150) 0:00:20.539 ******
2026-01-10 14:28:44.158754 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:28:44.158757 | orchestrator |
2026-01-10 14:28:44.158773 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-01-10 14:28:44.158777 | orchestrator | Saturday 10 January 2026 14:28:40 +0000 (0:00:00.133) 0:00:20.673 ******
2026-01-10 14:28:44.158782 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:28:44.158813 | orchestrator |
2026-01-10 14:28:44.158820 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-01-10 14:28:44.158828 | orchestrator | Saturday 10 January 2026 14:28:40 +0000 (0:00:00.165) 0:00:20.839 ******
2026-01-10 14:28:44.158836 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:28:44.158846 | orchestrator |
2026-01-10 14:28:44.158851 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-01-10 14:28:44.158894 | orchestrator | Saturday 10 January 2026 14:28:41 +0000 (0:00:00.375) 0:00:21.214 ******
2026-01-10 14:28:44.158902 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:28:44.158907 | orchestrator |
2026-01-10 14:28:44.158914 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-01-10 14:28:44.158938 | orchestrator | Saturday 10 January 2026 14:28:41 +0000 (0:00:00.151) 0:00:21.366 ******
2026-01-10 14:28:44.158945 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:28:44.158951 | orchestrator |
2026-01-10 14:28:44.158958 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-01-10 14:28:44.159001 | orchestrator | Saturday 10 January 2026 14:28:41 +0000 (0:00:00.163) 0:00:21.530 ******
2026-01-10 14:28:44.159007 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:28:44.159012 | orchestrator |
2026-01-10 14:28:44.159019 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-01-10 14:28:44.159025 | orchestrator | Saturday 10 January 2026 14:28:41 +0000 (0:00:00.153) 0:00:21.684 ******
2026-01-10 14:28:44.159031 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:28:44.159036 | orchestrator |
2026-01-10 14:28:44.159043 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-01-10 14:28:44.159049 | orchestrator | Saturday 10 January 2026 14:28:41 +0000 (0:00:00.183) 0:00:21.867 ******
2026-01-10 14:28:44.159072 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:28:44.159078 | orchestrator |
2026-01-10 14:28:44.159085 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-01-10 14:28:44.159091 | orchestrator | Saturday 10 January 2026 14:28:41 +0000 (0:00:00.154) 0:00:22.022 ******
2026-01-10 14:28:44.159097 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:28:44.159104 | orchestrator |
2026-01-10 14:28:44.159110 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-01-10 14:28:44.159116 | orchestrator | Saturday 10 January 2026 14:28:42 +0000 (0:00:00.157) 0:00:22.179 ******
2026-01-10 14:28:44.159123 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:28:44.159129 | orchestrator |
2026-01-10 14:28:44.159135 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-01-10 14:28:44.159141 | orchestrator | Saturday 10 January 2026 14:28:42 +0000 (0:00:00.130) 0:00:22.310 ******
2026-01-10 14:28:44.159147 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:28:44.159154 | orchestrator |
2026-01-10 14:28:44.159160 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-01-10 14:28:44.159166 | orchestrator | Saturday 10 January 2026 14:28:42 +0000 (0:00:00.152) 0:00:22.463 ******
2026-01-10 14:28:44.159181 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:28:44.159188 | orchestrator |
2026-01-10 14:28:44.159194 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-01-10 14:28:44.159200 | orchestrator | Saturday 10 January 2026 14:28:42 +0000 (0:00:00.154) 0:00:22.618 ******
2026-01-10 14:28:44.159207 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:28:44.159213 | orchestrator |
2026-01-10 14:28:44.159219 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-01-10 14:28:44.159226 | orchestrator | Saturday 10 January 2026 14:28:42 +0000 (0:00:00.144) 0:00:22.762 ******
2026-01-10 14:28:44.159232 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:28:44.159239 | orchestrator |
2026-01-10 14:28:44.159245 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-01-10 14:28:44.159251 | orchestrator | Saturday 10 January 2026 14:28:42 +0000 (0:00:00.146) 0:00:22.908 ******
2026-01-10 14:28:44.159259 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-afcf3728-3a76-5607-aebb-61451d8643bd', 'data_vg': 'ceph-afcf3728-3a76-5607-aebb-61451d8643bd'})
2026-01-10 14:28:44.159267 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7d69473f-eeb6-5b22-bf27-181ed9eac77f', 'data_vg': 'ceph-7d69473f-eeb6-5b22-bf27-181ed9eac77f'})
2026-01-10 14:28:44.159273 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:28:44.159279 | orchestrator |
2026-01-10 14:28:44.159285 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-01-10 14:28:44.159291 | orchestrator | Saturday 10 January 2026 14:28:43 +0000 (0:00:00.380) 0:00:23.289 ******
2026-01-10 14:28:44.159297 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-afcf3728-3a76-5607-aebb-61451d8643bd', 'data_vg': 'ceph-afcf3728-3a76-5607-aebb-61451d8643bd'})
2026-01-10 14:28:44.159400 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7d69473f-eeb6-5b22-bf27-181ed9eac77f', 'data_vg': 'ceph-7d69473f-eeb6-5b22-bf27-181ed9eac77f'})
2026-01-10 14:28:44.159409 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:28:44.159416 | orchestrator |
2026-01-10 14:28:44.159422 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-01-10 14:28:44.159436 | orchestrator | Saturday 10 January 2026 14:28:43 +0000 (0:00:00.181) 0:00:23.470 ******
2026-01-10 14:28:44.159443 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-afcf3728-3a76-5607-aebb-61451d8643bd', 'data_vg': 'ceph-afcf3728-3a76-5607-aebb-61451d8643bd'})
2026-01-10 14:28:44.159449 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7d69473f-eeb6-5b22-bf27-181ed9eac77f', 'data_vg': 'ceph-7d69473f-eeb6-5b22-bf27-181ed9eac77f'})
2026-01-10 14:28:44.159456 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:28:44.159462 | orchestrator |
2026-01-10 14:28:44.159469 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-01-10 14:28:44.159475 | orchestrator | Saturday 10 January 2026 14:28:43 +0000 (0:00:00.182) 0:00:23.652 ******
2026-01-10 14:28:44.159482 | orchestrator | skipping: [testbed-node-3] => (item={'data':
'osd-block-afcf3728-3a76-5607-aebb-61451d8643bd', 'data_vg': 'ceph-afcf3728-3a76-5607-aebb-61451d8643bd'})  2026-01-10 14:28:44.159488 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7d69473f-eeb6-5b22-bf27-181ed9eac77f', 'data_vg': 'ceph-7d69473f-eeb6-5b22-bf27-181ed9eac77f'})  2026-01-10 14:28:44.159516 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:28:44.159523 | orchestrator | 2026-01-10 14:28:44.159529 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-01-10 14:28:44.159535 | orchestrator | Saturday 10 January 2026 14:28:43 +0000 (0:00:00.199) 0:00:23.852 ****** 2026-01-10 14:28:44.159542 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-afcf3728-3a76-5607-aebb-61451d8643bd', 'data_vg': 'ceph-afcf3728-3a76-5607-aebb-61451d8643bd'})  2026-01-10 14:28:44.159548 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7d69473f-eeb6-5b22-bf27-181ed9eac77f', 'data_vg': 'ceph-7d69473f-eeb6-5b22-bf27-181ed9eac77f'})  2026-01-10 14:28:44.159595 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:28:44.159602 | orchestrator | 2026-01-10 14:28:44.159608 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-01-10 14:28:44.159615 | orchestrator | Saturday 10 January 2026 14:28:43 +0000 (0:00:00.178) 0:00:24.031 ****** 2026-01-10 14:28:44.159629 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-afcf3728-3a76-5607-aebb-61451d8643bd', 'data_vg': 'ceph-afcf3728-3a76-5607-aebb-61451d8643bd'})  2026-01-10 14:28:49.862764 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7d69473f-eeb6-5b22-bf27-181ed9eac77f', 'data_vg': 'ceph-7d69473f-eeb6-5b22-bf27-181ed9eac77f'})  2026-01-10 14:28:49.862828 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:28:49.862838 | orchestrator | 2026-01-10 14:28:49.862847 | orchestrator | TASK [Create DB LVs for 
ceph_db_wal_devices] *********************************** 2026-01-10 14:28:49.862855 | orchestrator | Saturday 10 January 2026 14:28:44 +0000 (0:00:00.184) 0:00:24.216 ****** 2026-01-10 14:28:49.862862 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-afcf3728-3a76-5607-aebb-61451d8643bd', 'data_vg': 'ceph-afcf3728-3a76-5607-aebb-61451d8643bd'})  2026-01-10 14:28:49.862870 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7d69473f-eeb6-5b22-bf27-181ed9eac77f', 'data_vg': 'ceph-7d69473f-eeb6-5b22-bf27-181ed9eac77f'})  2026-01-10 14:28:49.862877 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:28:49.862884 | orchestrator | 2026-01-10 14:28:49.862892 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-01-10 14:28:49.862899 | orchestrator | Saturday 10 January 2026 14:28:44 +0000 (0:00:00.175) 0:00:24.391 ****** 2026-01-10 14:28:49.862906 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-afcf3728-3a76-5607-aebb-61451d8643bd', 'data_vg': 'ceph-afcf3728-3a76-5607-aebb-61451d8643bd'})  2026-01-10 14:28:49.862913 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7d69473f-eeb6-5b22-bf27-181ed9eac77f', 'data_vg': 'ceph-7d69473f-eeb6-5b22-bf27-181ed9eac77f'})  2026-01-10 14:28:49.862920 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:28:49.862928 | orchestrator | 2026-01-10 14:28:49.862935 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-01-10 14:28:49.862941 | orchestrator | Saturday 10 January 2026 14:28:44 +0000 (0:00:00.162) 0:00:24.554 ****** 2026-01-10 14:28:49.862948 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:28:49.862955 | orchestrator | 2026-01-10 14:28:49.862961 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-01-10 14:28:49.862969 | orchestrator | Saturday 10 January 2026 14:28:45 +0000 
(0:00:00.571) 0:00:25.126 ****** 2026-01-10 14:28:49.862976 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:28:49.862983 | orchestrator | 2026-01-10 14:28:49.862990 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-01-10 14:28:49.862996 | orchestrator | Saturday 10 January 2026 14:28:45 +0000 (0:00:00.502) 0:00:25.628 ****** 2026-01-10 14:28:49.863003 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:28:49.863010 | orchestrator | 2026-01-10 14:28:49.863017 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-01-10 14:28:49.863024 | orchestrator | Saturday 10 January 2026 14:28:45 +0000 (0:00:00.188) 0:00:25.817 ****** 2026-01-10 14:28:49.863030 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-7d69473f-eeb6-5b22-bf27-181ed9eac77f', 'vg_name': 'ceph-7d69473f-eeb6-5b22-bf27-181ed9eac77f'}) 2026-01-10 14:28:49.863038 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-afcf3728-3a76-5607-aebb-61451d8643bd', 'vg_name': 'ceph-afcf3728-3a76-5607-aebb-61451d8643bd'}) 2026-01-10 14:28:49.863045 | orchestrator | 2026-01-10 14:28:49.863051 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-01-10 14:28:49.863058 | orchestrator | Saturday 10 January 2026 14:28:45 +0000 (0:00:00.171) 0:00:25.988 ****** 2026-01-10 14:28:49.863065 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-afcf3728-3a76-5607-aebb-61451d8643bd', 'data_vg': 'ceph-afcf3728-3a76-5607-aebb-61451d8643bd'})  2026-01-10 14:28:49.863087 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7d69473f-eeb6-5b22-bf27-181ed9eac77f', 'data_vg': 'ceph-7d69473f-eeb6-5b22-bf27-181ed9eac77f'})  2026-01-10 14:28:49.863094 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:28:49.863100 | orchestrator | 2026-01-10 14:28:49.863107 | orchestrator | TASK [Fail if DB LV defined in 
lvm_volumes is missing] ************************* 2026-01-10 14:28:49.863114 | orchestrator | Saturday 10 January 2026 14:28:46 +0000 (0:00:00.441) 0:00:26.430 ****** 2026-01-10 14:28:49.863121 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-afcf3728-3a76-5607-aebb-61451d8643bd', 'data_vg': 'ceph-afcf3728-3a76-5607-aebb-61451d8643bd'})  2026-01-10 14:28:49.863128 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7d69473f-eeb6-5b22-bf27-181ed9eac77f', 'data_vg': 'ceph-7d69473f-eeb6-5b22-bf27-181ed9eac77f'})  2026-01-10 14:28:49.863135 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:28:49.863142 | orchestrator | 2026-01-10 14:28:49.863149 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-01-10 14:28:49.863156 | orchestrator | Saturday 10 January 2026 14:28:46 +0000 (0:00:00.174) 0:00:26.605 ****** 2026-01-10 14:28:49.863163 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-afcf3728-3a76-5607-aebb-61451d8643bd', 'data_vg': 'ceph-afcf3728-3a76-5607-aebb-61451d8643bd'})  2026-01-10 14:28:49.863170 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7d69473f-eeb6-5b22-bf27-181ed9eac77f', 'data_vg': 'ceph-7d69473f-eeb6-5b22-bf27-181ed9eac77f'})  2026-01-10 14:28:49.863177 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:28:49.863184 | orchestrator | 2026-01-10 14:28:49.863190 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-01-10 14:28:49.863197 | orchestrator | Saturday 10 January 2026 14:28:46 +0000 (0:00:00.161) 0:00:26.766 ****** 2026-01-10 14:28:49.863214 | orchestrator | ok: [testbed-node-3] => { 2026-01-10 14:28:49.863220 | orchestrator |  "lvm_report": { 2026-01-10 14:28:49.863227 | orchestrator |  "lv": [ 2026-01-10 14:28:49.863234 | orchestrator |  { 2026-01-10 14:28:49.863240 | orchestrator |  "lv_name": 
"osd-block-7d69473f-eeb6-5b22-bf27-181ed9eac77f", 2026-01-10 14:28:49.863248 | orchestrator |  "vg_name": "ceph-7d69473f-eeb6-5b22-bf27-181ed9eac77f" 2026-01-10 14:28:49.863255 | orchestrator |  }, 2026-01-10 14:28:49.863261 | orchestrator |  { 2026-01-10 14:28:49.863268 | orchestrator |  "lv_name": "osd-block-afcf3728-3a76-5607-aebb-61451d8643bd", 2026-01-10 14:28:49.863275 | orchestrator |  "vg_name": "ceph-afcf3728-3a76-5607-aebb-61451d8643bd" 2026-01-10 14:28:49.863281 | orchestrator |  } 2026-01-10 14:28:49.863288 | orchestrator |  ], 2026-01-10 14:28:49.863295 | orchestrator |  "pv": [ 2026-01-10 14:28:49.863302 | orchestrator |  { 2026-01-10 14:28:49.863309 | orchestrator |  "pv_name": "/dev/sdb", 2026-01-10 14:28:49.863378 | orchestrator |  "vg_name": "ceph-afcf3728-3a76-5607-aebb-61451d8643bd" 2026-01-10 14:28:49.863389 | orchestrator |  }, 2026-01-10 14:28:49.863396 | orchestrator |  { 2026-01-10 14:28:49.863403 | orchestrator |  "pv_name": "/dev/sdc", 2026-01-10 14:28:49.863421 | orchestrator |  "vg_name": "ceph-7d69473f-eeb6-5b22-bf27-181ed9eac77f" 2026-01-10 14:28:49.863429 | orchestrator |  } 2026-01-10 14:28:49.863437 | orchestrator |  ] 2026-01-10 14:28:49.863444 | orchestrator |  } 2026-01-10 14:28:49.863452 | orchestrator | } 2026-01-10 14:28:49.863459 | orchestrator | 2026-01-10 14:28:49.863468 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-01-10 14:28:49.863475 | orchestrator | 2026-01-10 14:28:49.863483 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-01-10 14:28:49.863490 | orchestrator | Saturday 10 January 2026 14:28:47 +0000 (0:00:00.372) 0:00:27.139 ****** 2026-01-10 14:28:49.863504 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-01-10 14:28:49.863511 | orchestrator | 2026-01-10 14:28:49.863518 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-01-10 
14:28:49.863525 | orchestrator | Saturday 10 January 2026 14:28:47 +0000 (0:00:00.275) 0:00:27.415 ****** 2026-01-10 14:28:49.863532 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:28:49.863541 | orchestrator | 2026-01-10 14:28:49.863549 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:28:49.863557 | orchestrator | Saturday 10 January 2026 14:28:47 +0000 (0:00:00.274) 0:00:27.689 ****** 2026-01-10 14:28:49.863565 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-01-10 14:28:49.863573 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-01-10 14:28:49.863581 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-01-10 14:28:49.863588 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-01-10 14:28:49.863595 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-01-10 14:28:49.863602 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-01-10 14:28:49.863613 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-01-10 14:28:49.863620 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-01-10 14:28:49.863627 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-01-10 14:28:49.863634 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-01-10 14:28:49.863642 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-01-10 14:28:49.863649 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-01-10 14:28:49.863656 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-01-10 14:28:49.863663 | orchestrator | 2026-01-10 14:28:49.863670 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:28:49.863676 | orchestrator | Saturday 10 January 2026 14:28:48 +0000 (0:00:00.466) 0:00:28.156 ****** 2026-01-10 14:28:49.863683 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:28:49.863690 | orchestrator | 2026-01-10 14:28:49.863696 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:28:49.863703 | orchestrator | Saturday 10 January 2026 14:28:48 +0000 (0:00:00.191) 0:00:28.347 ****** 2026-01-10 14:28:49.863710 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:28:49.863717 | orchestrator | 2026-01-10 14:28:49.863724 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:28:49.863731 | orchestrator | Saturday 10 January 2026 14:28:48 +0000 (0:00:00.194) 0:00:28.542 ****** 2026-01-10 14:28:49.863738 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:28:49.863744 | orchestrator | 2026-01-10 14:28:49.863751 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:28:49.863758 | orchestrator | Saturday 10 January 2026 14:28:49 +0000 (0:00:00.682) 0:00:29.224 ****** 2026-01-10 14:28:49.863764 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:28:49.863771 | orchestrator | 2026-01-10 14:28:49.863778 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:28:49.863785 | orchestrator | Saturday 10 January 2026 14:28:49 +0000 (0:00:00.235) 0:00:29.459 ****** 2026-01-10 14:28:49.863792 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:28:49.863799 | orchestrator | 2026-01-10 14:28:49.863805 | orchestrator | TASK [Add known links to the 
list of available block devices] ****************** 2026-01-10 14:28:49.863812 | orchestrator | Saturday 10 January 2026 14:28:49 +0000 (0:00:00.225) 0:00:29.684 ****** 2026-01-10 14:28:49.863824 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:28:49.863831 | orchestrator | 2026-01-10 14:28:49.863845 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:29:01.774147 | orchestrator | Saturday 10 January 2026 14:28:49 +0000 (0:00:00.234) 0:00:29.919 ****** 2026-01-10 14:29:01.774245 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:29:01.774259 | orchestrator | 2026-01-10 14:29:01.774266 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:29:01.774271 | orchestrator | Saturday 10 January 2026 14:28:50 +0000 (0:00:00.208) 0:00:30.127 ****** 2026-01-10 14:29:01.774275 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:29:01.774279 | orchestrator | 2026-01-10 14:29:01.774283 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:29:01.774287 | orchestrator | Saturday 10 January 2026 14:28:50 +0000 (0:00:00.222) 0:00:30.349 ****** 2026-01-10 14:29:01.774292 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_e7f1afe6-f3aa-449f-9835-56bec5ec9c51) 2026-01-10 14:29:01.774298 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_e7f1afe6-f3aa-449f-9835-56bec5ec9c51) 2026-01-10 14:29:01.774301 | orchestrator | 2026-01-10 14:29:01.774305 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:29:01.774309 | orchestrator | Saturday 10 January 2026 14:28:50 +0000 (0:00:00.440) 0:00:30.790 ****** 2026-01-10 14:29:01.774313 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_44339de3-07bb-4d03-9d3b-2e0777e51af2) 2026-01-10 14:29:01.774320 | orchestrator | ok: 
[testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_44339de3-07bb-4d03-9d3b-2e0777e51af2) 2026-01-10 14:29:01.774326 | orchestrator | 2026-01-10 14:29:01.774332 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:29:01.774340 | orchestrator | Saturday 10 January 2026 14:28:51 +0000 (0:00:00.430) 0:00:31.221 ****** 2026-01-10 14:29:01.774445 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_09bcf5bb-a136-4209-bef3-37a648ec73be) 2026-01-10 14:29:01.774455 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_09bcf5bb-a136-4209-bef3-37a648ec73be) 2026-01-10 14:29:01.774459 | orchestrator | 2026-01-10 14:29:01.774463 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:29:01.774467 | orchestrator | Saturday 10 January 2026 14:28:51 +0000 (0:00:00.495) 0:00:31.716 ****** 2026-01-10 14:29:01.774471 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_c56b24e3-125a-48ee-acc4-7420ce900c20) 2026-01-10 14:29:01.774476 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_c56b24e3-125a-48ee-acc4-7420ce900c20) 2026-01-10 14:29:01.774479 | orchestrator | 2026-01-10 14:29:01.774483 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:29:01.774487 | orchestrator | Saturday 10 January 2026 14:28:52 +0000 (0:00:00.810) 0:00:32.527 ****** 2026-01-10 14:29:01.774491 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-01-10 14:29:01.774495 | orchestrator | 2026-01-10 14:29:01.774499 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:29:01.774503 | orchestrator | Saturday 10 January 2026 14:28:53 +0000 (0:00:00.688) 0:00:33.216 ****** 2026-01-10 14:29:01.774519 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => 
(item=loop0) 2026-01-10 14:29:01.774524 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-01-10 14:29:01.774528 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-01-10 14:29:01.774532 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-01-10 14:29:01.774535 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-01-10 14:29:01.774539 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-01-10 14:29:01.774562 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-01-10 14:29:01.774566 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-01-10 14:29:01.774570 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-01-10 14:29:01.774573 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-01-10 14:29:01.774577 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2026-01-10 14:29:01.774581 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-01-10 14:29:01.774584 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-01-10 14:29:01.774588 | orchestrator | 2026-01-10 14:29:01.774592 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:29:01.774595 | orchestrator | Saturday 10 January 2026 14:28:54 +0000 (0:00:00.985) 0:00:34.201 ****** 2026-01-10 14:29:01.774599 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:29:01.774603 | orchestrator | 2026-01-10 
14:29:01.774607 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:29:01.774611 | orchestrator | Saturday 10 January 2026 14:28:54 +0000 (0:00:00.220) 0:00:34.422 ****** 2026-01-10 14:29:01.774614 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:29:01.774618 | orchestrator | 2026-01-10 14:29:01.774622 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:29:01.774625 | orchestrator | Saturday 10 January 2026 14:28:54 +0000 (0:00:00.235) 0:00:34.658 ****** 2026-01-10 14:29:01.774630 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:29:01.774641 | orchestrator | 2026-01-10 14:29:01.774660 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:29:01.774665 | orchestrator | Saturday 10 January 2026 14:28:54 +0000 (0:00:00.205) 0:00:34.864 ****** 2026-01-10 14:29:01.774670 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:29:01.774674 | orchestrator | 2026-01-10 14:29:01.774678 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:29:01.774683 | orchestrator | Saturday 10 January 2026 14:28:55 +0000 (0:00:00.202) 0:00:35.066 ****** 2026-01-10 14:29:01.774687 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:29:01.774692 | orchestrator | 2026-01-10 14:29:01.774697 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:29:01.774701 | orchestrator | Saturday 10 January 2026 14:28:55 +0000 (0:00:00.244) 0:00:35.311 ****** 2026-01-10 14:29:01.774705 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:29:01.774709 | orchestrator | 2026-01-10 14:29:01.774713 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:29:01.774717 | orchestrator | Saturday 10 January 2026 14:28:55 +0000 (0:00:00.218) 
0:00:35.529 ****** 2026-01-10 14:29:01.774721 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:29:01.774724 | orchestrator | 2026-01-10 14:29:01.774728 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:29:01.774732 | orchestrator | Saturday 10 January 2026 14:28:55 +0000 (0:00:00.252) 0:00:35.782 ****** 2026-01-10 14:29:01.774736 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:29:01.774739 | orchestrator | 2026-01-10 14:29:01.774743 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:29:01.774747 | orchestrator | Saturday 10 January 2026 14:28:55 +0000 (0:00:00.246) 0:00:36.028 ****** 2026-01-10 14:29:01.774751 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-01-10 14:29:01.774755 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-01-10 14:29:01.774759 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-01-10 14:29:01.774763 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-01-10 14:29:01.774767 | orchestrator | 2026-01-10 14:29:01.774771 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:29:01.774778 | orchestrator | Saturday 10 January 2026 14:28:56 +0000 (0:00:00.889) 0:00:36.918 ****** 2026-01-10 14:29:01.774796 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:29:01.774802 | orchestrator | 2026-01-10 14:29:01.774816 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:29:01.774825 | orchestrator | Saturday 10 January 2026 14:28:57 +0000 (0:00:00.246) 0:00:37.165 ****** 2026-01-10 14:29:01.774832 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:29:01.774840 | orchestrator | 2026-01-10 14:29:01.774846 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:29:01.774851 | orchestrator | Saturday 10 
January 2026 14:28:57 +0000 (0:00:00.681) 0:00:37.846 ****** 2026-01-10 14:29:01.774857 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:29:01.774863 | orchestrator | 2026-01-10 14:29:01.774869 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:29:01.774874 | orchestrator | Saturday 10 January 2026 14:28:57 +0000 (0:00:00.212) 0:00:38.059 ****** 2026-01-10 14:29:01.774880 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:29:01.774886 | orchestrator | 2026-01-10 14:29:01.774891 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-01-10 14:29:01.774897 | orchestrator | Saturday 10 January 2026 14:28:58 +0000 (0:00:00.216) 0:00:38.275 ****** 2026-01-10 14:29:01.774903 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:29:01.774909 | orchestrator | 2026-01-10 14:29:01.774915 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-01-10 14:29:01.774922 | orchestrator | Saturday 10 January 2026 14:28:58 +0000 (0:00:00.142) 0:00:38.417 ****** 2026-01-10 14:29:01.774928 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8bd1ebb6-f1fa-58d8-b8a2-53a51729cfca'}}) 2026-01-10 14:29:01.774935 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd6926eeb-1396-512c-9972-e44f7d919ea4'}}) 2026-01-10 14:29:01.774941 | orchestrator | 2026-01-10 14:29:01.774945 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-01-10 14:29:01.774949 | orchestrator | Saturday 10 January 2026 14:28:58 +0000 (0:00:00.187) 0:00:38.605 ****** 2026-01-10 14:29:01.774954 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-8bd1ebb6-f1fa-58d8-b8a2-53a51729cfca', 'data_vg': 'ceph-8bd1ebb6-f1fa-58d8-b8a2-53a51729cfca'}) 2026-01-10 14:29:01.774960 | orchestrator | changed: [testbed-node-4] => 
(item={'data': 'osd-block-d6926eeb-1396-512c-9972-e44f7d919ea4', 'data_vg': 'ceph-d6926eeb-1396-512c-9972-e44f7d919ea4'}) 2026-01-10 14:29:01.774964 | orchestrator | 2026-01-10 14:29:01.774967 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-01-10 14:29:01.774971 | orchestrator | Saturday 10 January 2026 14:29:00 +0000 (0:00:01.756) 0:00:40.361 ****** 2026-01-10 14:29:01.774990 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8bd1ebb6-f1fa-58d8-b8a2-53a51729cfca', 'data_vg': 'ceph-8bd1ebb6-f1fa-58d8-b8a2-53a51729cfca'})  2026-01-10 14:29:01.774997 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d6926eeb-1396-512c-9972-e44f7d919ea4', 'data_vg': 'ceph-d6926eeb-1396-512c-9972-e44f7d919ea4'})  2026-01-10 14:29:01.775000 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:29:01.775004 | orchestrator | 2026-01-10 14:29:01.775008 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-01-10 14:29:01.775012 | orchestrator | Saturday 10 January 2026 14:29:00 +0000 (0:00:00.188) 0:00:40.550 ****** 2026-01-10 14:29:01.775015 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-8bd1ebb6-f1fa-58d8-b8a2-53a51729cfca', 'data_vg': 'ceph-8bd1ebb6-f1fa-58d8-b8a2-53a51729cfca'}) 2026-01-10 14:29:01.775024 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-d6926eeb-1396-512c-9972-e44f7d919ea4', 'data_vg': 'ceph-d6926eeb-1396-512c-9972-e44f7d919ea4'}) 2026-01-10 14:29:07.599878 | orchestrator | 2026-01-10 14:29:07.599972 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-01-10 14:29:07.600006 | orchestrator | Saturday 10 January 2026 14:29:01 +0000 (0:00:01.276) 0:00:41.826 ****** 2026-01-10 14:29:07.600030 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8bd1ebb6-f1fa-58d8-b8a2-53a51729cfca', 'data_vg': 
'ceph-8bd1ebb6-f1fa-58d8-b8a2-53a51729cfca'})  2026-01-10 14:29:07.600052 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d6926eeb-1396-512c-9972-e44f7d919ea4', 'data_vg': 'ceph-d6926eeb-1396-512c-9972-e44f7d919ea4'})  2026-01-10 14:29:07.600060 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:29:07.600068 | orchestrator | 2026-01-10 14:29:07.600076 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-01-10 14:29:07.600091 | orchestrator | Saturday 10 January 2026 14:29:01 +0000 (0:00:00.167) 0:00:41.994 ****** 2026-01-10 14:29:07.600099 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:29:07.600107 | orchestrator | 2026-01-10 14:29:07.600114 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-01-10 14:29:07.600122 | orchestrator | Saturday 10 January 2026 14:29:02 +0000 (0:00:00.180) 0:00:42.174 ****** 2026-01-10 14:29:07.600129 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8bd1ebb6-f1fa-58d8-b8a2-53a51729cfca', 'data_vg': 'ceph-8bd1ebb6-f1fa-58d8-b8a2-53a51729cfca'})  2026-01-10 14:29:07.600136 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d6926eeb-1396-512c-9972-e44f7d919ea4', 'data_vg': 'ceph-d6926eeb-1396-512c-9972-e44f7d919ea4'})  2026-01-10 14:29:07.600143 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:29:07.600151 | orchestrator | 2026-01-10 14:29:07.600158 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-01-10 14:29:07.600165 | orchestrator | Saturday 10 January 2026 14:29:02 +0000 (0:00:00.187) 0:00:42.362 ****** 2026-01-10 14:29:07.600172 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:29:07.600179 | orchestrator | 2026-01-10 14:29:07.600186 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-01-10 14:29:07.600194 | orchestrator | 
Saturday 10 January 2026 14:29:02 +0000 (0:00:00.168) 0:00:42.530 ****** 2026-01-10 14:29:07.600201 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8bd1ebb6-f1fa-58d8-b8a2-53a51729cfca', 'data_vg': 'ceph-8bd1ebb6-f1fa-58d8-b8a2-53a51729cfca'})  2026-01-10 14:29:07.600208 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d6926eeb-1396-512c-9972-e44f7d919ea4', 'data_vg': 'ceph-d6926eeb-1396-512c-9972-e44f7d919ea4'})  2026-01-10 14:29:07.600215 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:29:07.600222 | orchestrator | 2026-01-10 14:29:07.600230 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-01-10 14:29:07.600241 | orchestrator | Saturday 10 January 2026 14:29:02 +0000 (0:00:00.437) 0:00:42.968 ****** 2026-01-10 14:29:07.600248 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:29:07.600255 | orchestrator | 2026-01-10 14:29:07.600262 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-01-10 14:29:07.600269 | orchestrator | Saturday 10 January 2026 14:29:03 +0000 (0:00:00.163) 0:00:43.131 ****** 2026-01-10 14:29:07.600276 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8bd1ebb6-f1fa-58d8-b8a2-53a51729cfca', 'data_vg': 'ceph-8bd1ebb6-f1fa-58d8-b8a2-53a51729cfca'})  2026-01-10 14:29:07.600284 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d6926eeb-1396-512c-9972-e44f7d919ea4', 'data_vg': 'ceph-d6926eeb-1396-512c-9972-e44f7d919ea4'})  2026-01-10 14:29:07.600291 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:29:07.600298 | orchestrator | 2026-01-10 14:29:07.600305 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-01-10 14:29:07.600312 | orchestrator | Saturday 10 January 2026 14:29:03 +0000 (0:00:00.155) 0:00:43.286 ****** 2026-01-10 14:29:07.600320 | orchestrator | ok: [testbed-node-4] 
2026-01-10 14:29:07.600328 | orchestrator | 2026-01-10 14:29:07.600335 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-01-10 14:29:07.600349 | orchestrator | Saturday 10 January 2026 14:29:03 +0000 (0:00:00.142) 0:00:43.429 ****** 2026-01-10 14:29:07.600356 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8bd1ebb6-f1fa-58d8-b8a2-53a51729cfca', 'data_vg': 'ceph-8bd1ebb6-f1fa-58d8-b8a2-53a51729cfca'})  2026-01-10 14:29:07.600392 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d6926eeb-1396-512c-9972-e44f7d919ea4', 'data_vg': 'ceph-d6926eeb-1396-512c-9972-e44f7d919ea4'})  2026-01-10 14:29:07.600400 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:29:07.600407 | orchestrator | 2026-01-10 14:29:07.600414 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2026-01-10 14:29:07.600422 | orchestrator | Saturday 10 January 2026 14:29:03 +0000 (0:00:00.166) 0:00:43.595 ****** 2026-01-10 14:29:07.600430 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8bd1ebb6-f1fa-58d8-b8a2-53a51729cfca', 'data_vg': 'ceph-8bd1ebb6-f1fa-58d8-b8a2-53a51729cfca'})  2026-01-10 14:29:07.600439 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d6926eeb-1396-512c-9972-e44f7d919ea4', 'data_vg': 'ceph-d6926eeb-1396-512c-9972-e44f7d919ea4'})  2026-01-10 14:29:07.600448 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:29:07.600456 | orchestrator | 2026-01-10 14:29:07.600465 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-01-10 14:29:07.600486 | orchestrator | Saturday 10 January 2026 14:29:03 +0000 (0:00:00.187) 0:00:43.783 ****** 2026-01-10 14:29:07.600495 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8bd1ebb6-f1fa-58d8-b8a2-53a51729cfca', 'data_vg': 'ceph-8bd1ebb6-f1fa-58d8-b8a2-53a51729cfca'})  2026-01-10 
14:29:07.600503 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d6926eeb-1396-512c-9972-e44f7d919ea4', 'data_vg': 'ceph-d6926eeb-1396-512c-9972-e44f7d919ea4'})  2026-01-10 14:29:07.600511 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:29:07.600520 | orchestrator | 2026-01-10 14:29:07.600528 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-01-10 14:29:07.600536 | orchestrator | Saturday 10 January 2026 14:29:03 +0000 (0:00:00.173) 0:00:43.957 ****** 2026-01-10 14:29:07.600545 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:29:07.600553 | orchestrator | 2026-01-10 14:29:07.600561 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-01-10 14:29:07.600569 | orchestrator | Saturday 10 January 2026 14:29:04 +0000 (0:00:00.137) 0:00:44.094 ****** 2026-01-10 14:29:07.600578 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:29:07.600586 | orchestrator | 2026-01-10 14:29:07.600594 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2026-01-10 14:29:07.600602 | orchestrator | Saturday 10 January 2026 14:29:04 +0000 (0:00:00.140) 0:00:44.234 ****** 2026-01-10 14:29:07.600611 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:29:07.600618 | orchestrator | 2026-01-10 14:29:07.600626 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-01-10 14:29:07.600634 | orchestrator | Saturday 10 January 2026 14:29:04 +0000 (0:00:00.131) 0:00:44.366 ****** 2026-01-10 14:29:07.600642 | orchestrator | ok: [testbed-node-4] => { 2026-01-10 14:29:07.600651 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-01-10 14:29:07.600659 | orchestrator | } 2026-01-10 14:29:07.600668 | orchestrator | 2026-01-10 14:29:07.600676 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-01-10 
14:29:07.600685 | orchestrator | Saturday 10 January 2026 14:29:04 +0000 (0:00:00.156) 0:00:44.523 ****** 2026-01-10 14:29:07.600693 | orchestrator | ok: [testbed-node-4] => { 2026-01-10 14:29:07.600701 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-01-10 14:29:07.600710 | orchestrator | } 2026-01-10 14:29:07.600717 | orchestrator | 2026-01-10 14:29:07.600725 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-01-10 14:29:07.600732 | orchestrator | Saturday 10 January 2026 14:29:04 +0000 (0:00:00.145) 0:00:44.668 ****** 2026-01-10 14:29:07.600744 | orchestrator | ok: [testbed-node-4] => { 2026-01-10 14:29:07.600752 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-01-10 14:29:07.600759 | orchestrator | } 2026-01-10 14:29:07.600766 | orchestrator | 2026-01-10 14:29:07.600773 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-01-10 14:29:07.600780 | orchestrator | Saturday 10 January 2026 14:29:04 +0000 (0:00:00.357) 0:00:45.026 ****** 2026-01-10 14:29:07.600788 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:29:07.600795 | orchestrator | 2026-01-10 14:29:07.600802 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-01-10 14:29:07.600814 | orchestrator | Saturday 10 January 2026 14:29:05 +0000 (0:00:00.533) 0:00:45.560 ****** 2026-01-10 14:29:07.600821 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:29:07.600828 | orchestrator | 2026-01-10 14:29:07.600835 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-01-10 14:29:07.600843 | orchestrator | Saturday 10 January 2026 14:29:06 +0000 (0:00:00.513) 0:00:46.074 ****** 2026-01-10 14:29:07.600850 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:29:07.600857 | orchestrator | 2026-01-10 14:29:07.600864 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] 
************************* 2026-01-10 14:29:07.600871 | orchestrator | Saturday 10 January 2026 14:29:06 +0000 (0:00:00.513) 0:00:46.588 ****** 2026-01-10 14:29:07.600878 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:29:07.600885 | orchestrator | 2026-01-10 14:29:07.600892 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-01-10 14:29:07.600899 | orchestrator | Saturday 10 January 2026 14:29:06 +0000 (0:00:00.142) 0:00:46.730 ****** 2026-01-10 14:29:07.600906 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:29:07.600914 | orchestrator | 2026-01-10 14:29:07.600921 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-01-10 14:29:07.600928 | orchestrator | Saturday 10 January 2026 14:29:06 +0000 (0:00:00.113) 0:00:46.844 ****** 2026-01-10 14:29:07.600935 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:29:07.600942 | orchestrator | 2026-01-10 14:29:07.600970 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-01-10 14:29:07.600977 | orchestrator | Saturday 10 January 2026 14:29:06 +0000 (0:00:00.115) 0:00:46.959 ****** 2026-01-10 14:29:07.600984 | orchestrator | ok: [testbed-node-4] => { 2026-01-10 14:29:07.600992 | orchestrator |  "vgs_report": { 2026-01-10 14:29:07.601000 | orchestrator |  "vg": [] 2026-01-10 14:29:07.601008 | orchestrator |  } 2026-01-10 14:29:07.601015 | orchestrator | } 2026-01-10 14:29:07.601022 | orchestrator | 2026-01-10 14:29:07.601029 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-01-10 14:29:07.601037 | orchestrator | Saturday 10 January 2026 14:29:07 +0000 (0:00:00.154) 0:00:47.113 ****** 2026-01-10 14:29:07.601044 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:29:07.601051 | orchestrator | 2026-01-10 14:29:07.601058 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] 
************************ 2026-01-10 14:29:07.601066 | orchestrator | Saturday 10 January 2026 14:29:07 +0000 (0:00:00.138) 0:00:47.252 ****** 2026-01-10 14:29:07.601073 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:29:07.601080 | orchestrator | 2026-01-10 14:29:07.601088 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-01-10 14:29:07.601095 | orchestrator | Saturday 10 January 2026 14:29:07 +0000 (0:00:00.147) 0:00:47.399 ****** 2026-01-10 14:29:07.601102 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:29:07.601109 | orchestrator | 2026-01-10 14:29:07.601117 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-01-10 14:29:07.601124 | orchestrator | Saturday 10 January 2026 14:29:07 +0000 (0:00:00.133) 0:00:47.533 ****** 2026-01-10 14:29:07.601131 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:29:07.601138 | orchestrator | 2026-01-10 14:29:07.601150 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-01-10 14:29:12.503422 | orchestrator | Saturday 10 January 2026 14:29:07 +0000 (0:00:00.123) 0:00:47.657 ****** 2026-01-10 14:29:12.504453 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:29:12.504483 | orchestrator | 2026-01-10 14:29:12.504493 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-01-10 14:29:12.504502 | orchestrator | Saturday 10 January 2026 14:29:07 +0000 (0:00:00.355) 0:00:48.012 ****** 2026-01-10 14:29:12.504510 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:29:12.504517 | orchestrator | 2026-01-10 14:29:12.504525 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-01-10 14:29:12.504533 | orchestrator | Saturday 10 January 2026 14:29:08 +0000 (0:00:00.140) 0:00:48.152 ****** 2026-01-10 14:29:12.504541 | orchestrator | skipping: [testbed-node-4] 
2026-01-10 14:29:12.504548 | orchestrator | 2026-01-10 14:29:12.504556 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-01-10 14:29:12.504564 | orchestrator | Saturday 10 January 2026 14:29:08 +0000 (0:00:00.144) 0:00:48.296 ****** 2026-01-10 14:29:12.504572 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:29:12.504579 | orchestrator | 2026-01-10 14:29:12.504587 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-01-10 14:29:12.504595 | orchestrator | Saturday 10 January 2026 14:29:08 +0000 (0:00:00.143) 0:00:48.440 ****** 2026-01-10 14:29:12.504602 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:29:12.504610 | orchestrator | 2026-01-10 14:29:12.504618 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-01-10 14:29:12.504625 | orchestrator | Saturday 10 January 2026 14:29:08 +0000 (0:00:00.130) 0:00:48.570 ****** 2026-01-10 14:29:12.504633 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:29:12.504641 | orchestrator | 2026-01-10 14:29:12.504648 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-01-10 14:29:12.504656 | orchestrator | Saturday 10 January 2026 14:29:08 +0000 (0:00:00.142) 0:00:48.713 ****** 2026-01-10 14:29:12.504664 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:29:12.504671 | orchestrator | 2026-01-10 14:29:12.504679 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-01-10 14:29:12.504687 | orchestrator | Saturday 10 January 2026 14:29:08 +0000 (0:00:00.143) 0:00:48.857 ****** 2026-01-10 14:29:12.504694 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:29:12.504702 | orchestrator | 2026-01-10 14:29:12.504709 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-01-10 14:29:12.504717 | orchestrator | 
Saturday 10 January 2026 14:29:08 +0000 (0:00:00.150) 0:00:49.007 ****** 2026-01-10 14:29:12.504725 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:29:12.504732 | orchestrator | 2026-01-10 14:29:12.504740 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-01-10 14:29:12.504748 | orchestrator | Saturday 10 January 2026 14:29:09 +0000 (0:00:00.150) 0:00:49.158 ****** 2026-01-10 14:29:12.504756 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:29:12.504764 | orchestrator | 2026-01-10 14:29:12.504777 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-01-10 14:29:12.504791 | orchestrator | Saturday 10 January 2026 14:29:09 +0000 (0:00:00.173) 0:00:49.331 ****** 2026-01-10 14:29:12.504803 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8bd1ebb6-f1fa-58d8-b8a2-53a51729cfca', 'data_vg': 'ceph-8bd1ebb6-f1fa-58d8-b8a2-53a51729cfca'})  2026-01-10 14:29:12.504819 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d6926eeb-1396-512c-9972-e44f7d919ea4', 'data_vg': 'ceph-d6926eeb-1396-512c-9972-e44f7d919ea4'})  2026-01-10 14:29:12.504831 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:29:12.504843 | orchestrator | 2026-01-10 14:29:12.504855 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-01-10 14:29:12.504867 | orchestrator | Saturday 10 January 2026 14:29:09 +0000 (0:00:00.163) 0:00:49.495 ****** 2026-01-10 14:29:12.504879 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8bd1ebb6-f1fa-58d8-b8a2-53a51729cfca', 'data_vg': 'ceph-8bd1ebb6-f1fa-58d8-b8a2-53a51729cfca'})  2026-01-10 14:29:12.504902 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d6926eeb-1396-512c-9972-e44f7d919ea4', 'data_vg': 'ceph-d6926eeb-1396-512c-9972-e44f7d919ea4'})  2026-01-10 14:29:12.504917 | orchestrator | skipping: 
[testbed-node-4] 2026-01-10 14:29:12.504930 | orchestrator | 2026-01-10 14:29:12.504943 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-01-10 14:29:12.504955 | orchestrator | Saturday 10 January 2026 14:29:09 +0000 (0:00:00.178) 0:00:49.673 ****** 2026-01-10 14:29:12.504967 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8bd1ebb6-f1fa-58d8-b8a2-53a51729cfca', 'data_vg': 'ceph-8bd1ebb6-f1fa-58d8-b8a2-53a51729cfca'})  2026-01-10 14:29:12.504979 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d6926eeb-1396-512c-9972-e44f7d919ea4', 'data_vg': 'ceph-d6926eeb-1396-512c-9972-e44f7d919ea4'})  2026-01-10 14:29:12.504993 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:29:12.505005 | orchestrator | 2026-01-10 14:29:12.505019 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2026-01-10 14:29:12.505031 | orchestrator | Saturday 10 January 2026 14:29:10 +0000 (0:00:00.411) 0:00:50.085 ****** 2026-01-10 14:29:12.505045 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8bd1ebb6-f1fa-58d8-b8a2-53a51729cfca', 'data_vg': 'ceph-8bd1ebb6-f1fa-58d8-b8a2-53a51729cfca'})  2026-01-10 14:29:12.505059 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d6926eeb-1396-512c-9972-e44f7d919ea4', 'data_vg': 'ceph-d6926eeb-1396-512c-9972-e44f7d919ea4'})  2026-01-10 14:29:12.505073 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:29:12.505087 | orchestrator | 2026-01-10 14:29:12.505125 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-01-10 14:29:12.505140 | orchestrator | Saturday 10 January 2026 14:29:10 +0000 (0:00:00.157) 0:00:50.242 ****** 2026-01-10 14:29:12.505154 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8bd1ebb6-f1fa-58d8-b8a2-53a51729cfca', 'data_vg': 
'ceph-8bd1ebb6-f1fa-58d8-b8a2-53a51729cfca'})  2026-01-10 14:29:12.505167 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d6926eeb-1396-512c-9972-e44f7d919ea4', 'data_vg': 'ceph-d6926eeb-1396-512c-9972-e44f7d919ea4'})  2026-01-10 14:29:12.505181 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:29:12.505193 | orchestrator | 2026-01-10 14:29:12.505207 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-01-10 14:29:12.505219 | orchestrator | Saturday 10 January 2026 14:29:10 +0000 (0:00:00.168) 0:00:50.411 ****** 2026-01-10 14:29:12.505232 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8bd1ebb6-f1fa-58d8-b8a2-53a51729cfca', 'data_vg': 'ceph-8bd1ebb6-f1fa-58d8-b8a2-53a51729cfca'})  2026-01-10 14:29:12.505246 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d6926eeb-1396-512c-9972-e44f7d919ea4', 'data_vg': 'ceph-d6926eeb-1396-512c-9972-e44f7d919ea4'})  2026-01-10 14:29:12.505259 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:29:12.505273 | orchestrator | 2026-01-10 14:29:12.505287 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-01-10 14:29:12.505300 | orchestrator | Saturday 10 January 2026 14:29:10 +0000 (0:00:00.155) 0:00:50.566 ****** 2026-01-10 14:29:12.505412 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8bd1ebb6-f1fa-58d8-b8a2-53a51729cfca', 'data_vg': 'ceph-8bd1ebb6-f1fa-58d8-b8a2-53a51729cfca'})  2026-01-10 14:29:12.505432 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d6926eeb-1396-512c-9972-e44f7d919ea4', 'data_vg': 'ceph-d6926eeb-1396-512c-9972-e44f7d919ea4'})  2026-01-10 14:29:12.505446 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:29:12.505459 | orchestrator | 2026-01-10 14:29:12.505473 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-01-10 
14:29:12.505486 | orchestrator | Saturday 10 January 2026 14:29:10 +0000 (0:00:00.162) 0:00:50.729 ****** 2026-01-10 14:29:12.505500 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8bd1ebb6-f1fa-58d8-b8a2-53a51729cfca', 'data_vg': 'ceph-8bd1ebb6-f1fa-58d8-b8a2-53a51729cfca'})  2026-01-10 14:29:12.505525 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d6926eeb-1396-512c-9972-e44f7d919ea4', 'data_vg': 'ceph-d6926eeb-1396-512c-9972-e44f7d919ea4'})  2026-01-10 14:29:12.505544 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:29:12.505558 | orchestrator | 2026-01-10 14:29:12.505571 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-01-10 14:29:12.505584 | orchestrator | Saturday 10 January 2026 14:29:10 +0000 (0:00:00.156) 0:00:50.885 ****** 2026-01-10 14:29:12.505597 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:29:12.505612 | orchestrator | 2026-01-10 14:29:12.505625 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-01-10 14:29:12.505638 | orchestrator | Saturday 10 January 2026 14:29:11 +0000 (0:00:00.519) 0:00:51.404 ****** 2026-01-10 14:29:12.505651 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:29:12.505664 | orchestrator | 2026-01-10 14:29:12.505678 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-01-10 14:29:12.505691 | orchestrator | Saturday 10 January 2026 14:29:11 +0000 (0:00:00.506) 0:00:51.911 ****** 2026-01-10 14:29:12.505704 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:29:12.505717 | orchestrator | 2026-01-10 14:29:12.505731 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-01-10 14:29:12.505743 | orchestrator | Saturday 10 January 2026 14:29:12 +0000 (0:00:00.174) 0:00:52.086 ****** 2026-01-10 14:29:12.505757 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 
'osd-block-8bd1ebb6-f1fa-58d8-b8a2-53a51729cfca', 'vg_name': 'ceph-8bd1ebb6-f1fa-58d8-b8a2-53a51729cfca'}) 2026-01-10 14:29:12.505773 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-d6926eeb-1396-512c-9972-e44f7d919ea4', 'vg_name': 'ceph-d6926eeb-1396-512c-9972-e44f7d919ea4'}) 2026-01-10 14:29:12.505786 | orchestrator | 2026-01-10 14:29:12.505799 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-01-10 14:29:12.505813 | orchestrator | Saturday 10 January 2026 14:29:12 +0000 (0:00:00.163) 0:00:52.250 ****** 2026-01-10 14:29:12.505827 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8bd1ebb6-f1fa-58d8-b8a2-53a51729cfca', 'data_vg': 'ceph-8bd1ebb6-f1fa-58d8-b8a2-53a51729cfca'})  2026-01-10 14:29:12.505840 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d6926eeb-1396-512c-9972-e44f7d919ea4', 'data_vg': 'ceph-d6926eeb-1396-512c-9972-e44f7d919ea4'})  2026-01-10 14:29:12.505853 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:29:12.505867 | orchestrator | 2026-01-10 14:29:12.505881 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-01-10 14:29:12.505893 | orchestrator | Saturday 10 January 2026 14:29:12 +0000 (0:00:00.149) 0:00:52.399 ****** 2026-01-10 14:29:12.505907 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8bd1ebb6-f1fa-58d8-b8a2-53a51729cfca', 'data_vg': 'ceph-8bd1ebb6-f1fa-58d8-b8a2-53a51729cfca'})  2026-01-10 14:29:12.505933 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d6926eeb-1396-512c-9972-e44f7d919ea4', 'data_vg': 'ceph-d6926eeb-1396-512c-9972-e44f7d919ea4'})  2026-01-10 14:29:18.779917 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:29:18.780030 | orchestrator | 2026-01-10 14:29:18.780047 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-01-10 14:29:18.780061 | 
orchestrator | Saturday 10 January 2026 14:29:12 +0000 (0:00:00.161) 0:00:52.561 ****** 2026-01-10 14:29:18.780072 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8bd1ebb6-f1fa-58d8-b8a2-53a51729cfca', 'data_vg': 'ceph-8bd1ebb6-f1fa-58d8-b8a2-53a51729cfca'})  2026-01-10 14:29:18.780086 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d6926eeb-1396-512c-9972-e44f7d919ea4', 'data_vg': 'ceph-d6926eeb-1396-512c-9972-e44f7d919ea4'})  2026-01-10 14:29:18.780097 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:29:18.780108 | orchestrator | 2026-01-10 14:29:18.780120 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-01-10 14:29:18.780155 | orchestrator | Saturday 10 January 2026 14:29:12 +0000 (0:00:00.147) 0:00:52.708 ****** 2026-01-10 14:29:18.780167 | orchestrator | ok: [testbed-node-4] => { 2026-01-10 14:29:18.780178 | orchestrator |  "lvm_report": { 2026-01-10 14:29:18.780190 | orchestrator |  "lv": [ 2026-01-10 14:29:18.780201 | orchestrator |  { 2026-01-10 14:29:18.780212 | orchestrator |  "lv_name": "osd-block-8bd1ebb6-f1fa-58d8-b8a2-53a51729cfca", 2026-01-10 14:29:18.780224 | orchestrator |  "vg_name": "ceph-8bd1ebb6-f1fa-58d8-b8a2-53a51729cfca" 2026-01-10 14:29:18.780234 | orchestrator |  }, 2026-01-10 14:29:18.780245 | orchestrator |  { 2026-01-10 14:29:18.780256 | orchestrator |  "lv_name": "osd-block-d6926eeb-1396-512c-9972-e44f7d919ea4", 2026-01-10 14:29:18.780266 | orchestrator |  "vg_name": "ceph-d6926eeb-1396-512c-9972-e44f7d919ea4" 2026-01-10 14:29:18.780306 | orchestrator |  } 2026-01-10 14:29:18.780318 | orchestrator |  ], 2026-01-10 14:29:18.780328 | orchestrator |  "pv": [ 2026-01-10 14:29:18.780339 | orchestrator |  { 2026-01-10 14:29:18.780350 | orchestrator |  "pv_name": "/dev/sdb", 2026-01-10 14:29:18.780360 | orchestrator |  "vg_name": "ceph-8bd1ebb6-f1fa-58d8-b8a2-53a51729cfca" 2026-01-10 14:29:18.780371 | orchestrator |  }, 2026-01-10 
14:29:18.780382 | orchestrator |  { 2026-01-10 14:29:18.780418 | orchestrator |  "pv_name": "/dev/sdc", 2026-01-10 14:29:18.780442 | orchestrator |  "vg_name": "ceph-d6926eeb-1396-512c-9972-e44f7d919ea4" 2026-01-10 14:29:18.780465 | orchestrator |  } 2026-01-10 14:29:18.780478 | orchestrator |  ] 2026-01-10 14:29:18.780491 | orchestrator |  } 2026-01-10 14:29:18.780503 | orchestrator | } 2026-01-10 14:29:18.780515 | orchestrator | 2026-01-10 14:29:18.780527 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-01-10 14:29:18.780539 | orchestrator | 2026-01-10 14:29:18.780551 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-01-10 14:29:18.780564 | orchestrator | Saturday 10 January 2026 14:29:13 +0000 (0:00:00.550) 0:00:53.258 ****** 2026-01-10 14:29:18.780593 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-01-10 14:29:18.780606 | orchestrator | 2026-01-10 14:29:18.780618 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-01-10 14:29:18.780631 | orchestrator | Saturday 10 January 2026 14:29:13 +0000 (0:00:00.245) 0:00:53.503 ****** 2026-01-10 14:29:18.780643 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:29:18.780655 | orchestrator | 2026-01-10 14:29:18.780668 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:29:18.780680 | orchestrator | Saturday 10 January 2026 14:29:13 +0000 (0:00:00.242) 0:00:53.746 ****** 2026-01-10 14:29:18.780693 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2026-01-10 14:29:18.780705 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2026-01-10 14:29:18.780717 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2026-01-10 14:29:18.780729 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2026-01-10 14:29:18.780741 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2026-01-10 14:29:18.780753 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2026-01-10 14:29:18.780765 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2026-01-10 14:29:18.780777 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2026-01-10 14:29:18.780789 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2026-01-10 14:29:18.780802 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2026-01-10 14:29:18.780823 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2026-01-10 14:29:18.780835 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2026-01-10 14:29:18.780846 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2026-01-10 14:29:18.780857 | orchestrator | 2026-01-10 14:29:18.780867 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:29:18.780882 | orchestrator | Saturday 10 January 2026 14:29:14 +0000 (0:00:00.440) 0:00:54.186 ****** 2026-01-10 14:29:18.780893 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:29:18.780903 | orchestrator | 2026-01-10 14:29:18.780914 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:29:18.780925 | orchestrator | Saturday 10 January 2026 14:29:14 +0000 (0:00:00.208) 0:00:54.395 ****** 2026-01-10 14:29:18.780935 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:29:18.780946 | orchestrator | 2026-01-10 
14:29:18.780957 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:29:18.780984 | orchestrator | Saturday 10 January 2026 14:29:14 +0000 (0:00:00.200) 0:00:54.595 ****** 2026-01-10 14:29:18.780995 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:29:18.781006 | orchestrator | 2026-01-10 14:29:18.781017 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:29:18.781028 | orchestrator | Saturday 10 January 2026 14:29:14 +0000 (0:00:00.197) 0:00:54.792 ****** 2026-01-10 14:29:18.781038 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:29:18.781049 | orchestrator | 2026-01-10 14:29:18.781060 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:29:18.781070 | orchestrator | Saturday 10 January 2026 14:29:14 +0000 (0:00:00.236) 0:00:55.029 ****** 2026-01-10 14:29:18.781081 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:29:18.781092 | orchestrator | 2026-01-10 14:29:18.781102 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:29:18.781113 | orchestrator | Saturday 10 January 2026 14:29:15 +0000 (0:00:00.660) 0:00:55.689 ****** 2026-01-10 14:29:18.781124 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:29:18.781134 | orchestrator | 2026-01-10 14:29:18.781145 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:29:18.781156 | orchestrator | Saturday 10 January 2026 14:29:15 +0000 (0:00:00.201) 0:00:55.891 ****** 2026-01-10 14:29:18.781166 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:29:18.781177 | orchestrator | 2026-01-10 14:29:18.781187 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:29:18.781198 | orchestrator | Saturday 10 January 2026 14:29:16 +0000 (0:00:00.199) 
0:00:56.091 ****** 2026-01-10 14:29:18.781209 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:29:18.781219 | orchestrator | 2026-01-10 14:29:18.781230 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:29:18.781241 | orchestrator | Saturday 10 January 2026 14:29:16 +0000 (0:00:00.216) 0:00:56.307 ****** 2026-01-10 14:29:18.781252 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_1f1914ea-fd0f-4c7c-b7b9-c351b421a456) 2026-01-10 14:29:18.781264 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_1f1914ea-fd0f-4c7c-b7b9-c351b421a456) 2026-01-10 14:29:18.781275 | orchestrator | 2026-01-10 14:29:18.781286 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:29:18.781297 | orchestrator | Saturday 10 January 2026 14:29:16 +0000 (0:00:00.448) 0:00:56.756 ****** 2026-01-10 14:29:18.781307 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_f0ceade3-8439-47c2-ab29-dba6a2d0af37) 2026-01-10 14:29:18.781319 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_f0ceade3-8439-47c2-ab29-dba6a2d0af37) 2026-01-10 14:29:18.781329 | orchestrator | 2026-01-10 14:29:18.781340 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:29:18.781363 | orchestrator | Saturday 10 January 2026 14:29:17 +0000 (0:00:00.466) 0:00:57.223 ****** 2026-01-10 14:29:18.781374 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_7172e707-12eb-4bf8-889d-ca95993faa89) 2026-01-10 14:29:18.781385 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_7172e707-12eb-4bf8-889d-ca95993faa89) 2026-01-10 14:29:18.781574 | orchestrator | 2026-01-10 14:29:18.781593 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:29:18.781603 | orchestrator | Saturday 10 
January 2026 14:29:17 +0000 (0:00:00.439) 0:00:57.663 ****** 2026-01-10 14:29:18.781614 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_573345f2-5167-4bb0-bd40-d392a39279fc) 2026-01-10 14:29:18.781625 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_573345f2-5167-4bb0-bd40-d392a39279fc) 2026-01-10 14:29:18.781636 | orchestrator | 2026-01-10 14:29:18.781646 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:29:18.781657 | orchestrator | Saturday 10 January 2026 14:29:18 +0000 (0:00:00.415) 0:00:58.078 ****** 2026-01-10 14:29:18.781668 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-01-10 14:29:18.781678 | orchestrator | 2026-01-10 14:29:18.781689 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:29:18.781700 | orchestrator | Saturday 10 January 2026 14:29:18 +0000 (0:00:00.338) 0:00:58.417 ****** 2026-01-10 14:29:18.781710 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-01-10 14:29:18.781721 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-01-10 14:29:18.781732 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-01-10 14:29:18.781742 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-01-10 14:29:18.781753 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-01-10 14:29:18.781764 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-01-10 14:29:18.781774 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-01-10 14:29:18.781785 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-01-10 14:29:18.781796 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-01-10 14:29:18.781806 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-01-10 14:29:18.781817 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2026-01-10 14:29:18.781841 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-01-10 14:29:28.089799 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-01-10 14:29:28.089913 | orchestrator | 2026-01-10 14:29:28.089932 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:29:28.089944 | orchestrator | Saturday 10 January 2026 14:29:18 +0000 (0:00:00.412) 0:00:58.830 ****** 2026-01-10 14:29:28.089955 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:29:28.089981 | orchestrator | 2026-01-10 14:29:28.089993 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:29:28.090004 | orchestrator | Saturday 10 January 2026 14:29:18 +0000 (0:00:00.228) 0:00:59.058 ****** 2026-01-10 14:29:28.090074 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:29:28.090090 | orchestrator | 2026-01-10 14:29:28.090101 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:29:28.090112 | orchestrator | Saturday 10 January 2026 14:29:19 +0000 (0:00:00.657) 0:00:59.716 ****** 2026-01-10 14:29:28.090123 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:29:28.090162 | orchestrator | 2026-01-10 14:29:28.090174 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:29:28.090185 | 
orchestrator | Saturday 10 January 2026 14:29:19 +0000 (0:00:00.209) 0:00:59.926 ****** 2026-01-10 14:29:28.090196 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:29:28.090206 | orchestrator | 2026-01-10 14:29:28.090217 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:29:28.090228 | orchestrator | Saturday 10 January 2026 14:29:20 +0000 (0:00:00.202) 0:01:00.128 ****** 2026-01-10 14:29:28.090239 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:29:28.090250 | orchestrator | 2026-01-10 14:29:28.090261 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:29:28.090272 | orchestrator | Saturday 10 January 2026 14:29:20 +0000 (0:00:00.209) 0:01:00.337 ****** 2026-01-10 14:29:28.090283 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:29:28.090293 | orchestrator | 2026-01-10 14:29:28.090304 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:29:28.090315 | orchestrator | Saturday 10 January 2026 14:29:20 +0000 (0:00:00.214) 0:01:00.552 ****** 2026-01-10 14:29:28.090327 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:29:28.090340 | orchestrator | 2026-01-10 14:29:28.090353 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:29:28.090365 | orchestrator | Saturday 10 January 2026 14:29:20 +0000 (0:00:00.216) 0:01:00.768 ****** 2026-01-10 14:29:28.090378 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:29:28.090390 | orchestrator | 2026-01-10 14:29:28.090403 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:29:28.090442 | orchestrator | Saturday 10 January 2026 14:29:20 +0000 (0:00:00.180) 0:01:00.949 ****** 2026-01-10 14:29:28.090454 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-01-10 14:29:28.090466 | orchestrator | 
ok: [testbed-node-5] => (item=sda14) 2026-01-10 14:29:28.090478 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-01-10 14:29:28.090488 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-01-10 14:29:28.090499 | orchestrator | 2026-01-10 14:29:28.090510 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:29:28.090521 | orchestrator | Saturday 10 January 2026 14:29:21 +0000 (0:00:00.645) 0:01:01.594 ****** 2026-01-10 14:29:28.090531 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:29:28.090542 | orchestrator | 2026-01-10 14:29:28.090553 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:29:28.090564 | orchestrator | Saturday 10 January 2026 14:29:21 +0000 (0:00:00.215) 0:01:01.810 ****** 2026-01-10 14:29:28.090575 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:29:28.090585 | orchestrator | 2026-01-10 14:29:28.090596 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:29:28.090607 | orchestrator | Saturday 10 January 2026 14:29:21 +0000 (0:00:00.212) 0:01:02.022 ****** 2026-01-10 14:29:28.090618 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:29:28.090628 | orchestrator | 2026-01-10 14:29:28.090639 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:29:28.090649 | orchestrator | Saturday 10 January 2026 14:29:22 +0000 (0:00:00.201) 0:01:02.224 ****** 2026-01-10 14:29:28.090660 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:29:28.090671 | orchestrator | 2026-01-10 14:29:28.090681 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-01-10 14:29:28.090692 | orchestrator | Saturday 10 January 2026 14:29:22 +0000 (0:00:00.199) 0:01:02.423 ****** 2026-01-10 14:29:28.090703 | orchestrator | skipping: [testbed-node-5] 2026-01-10 
14:29:28.090713 | orchestrator | 2026-01-10 14:29:28.090724 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-01-10 14:29:28.090735 | orchestrator | Saturday 10 January 2026 14:29:22 +0000 (0:00:00.332) 0:01:02.756 ****** 2026-01-10 14:29:28.090745 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '377cb61f-8fa6-58d2-888b-072b5e96ec0c'}}) 2026-01-10 14:29:28.090765 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '82a5292d-e4f5-5675-b04e-23ddf5e1abb7'}}) 2026-01-10 14:29:28.090776 | orchestrator | 2026-01-10 14:29:28.090787 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-01-10 14:29:28.090798 | orchestrator | Saturday 10 January 2026 14:29:22 +0000 (0:00:00.201) 0:01:02.957 ****** 2026-01-10 14:29:28.090810 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-377cb61f-8fa6-58d2-888b-072b5e96ec0c', 'data_vg': 'ceph-377cb61f-8fa6-58d2-888b-072b5e96ec0c'}) 2026-01-10 14:29:28.090840 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-82a5292d-e4f5-5675-b04e-23ddf5e1abb7', 'data_vg': 'ceph-82a5292d-e4f5-5675-b04e-23ddf5e1abb7'}) 2026-01-10 14:29:28.090852 | orchestrator | 2026-01-10 14:29:28.090863 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-01-10 14:29:28.090894 | orchestrator | Saturday 10 January 2026 14:29:24 +0000 (0:00:02.060) 0:01:05.018 ****** 2026-01-10 14:29:28.090906 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-377cb61f-8fa6-58d2-888b-072b5e96ec0c', 'data_vg': 'ceph-377cb61f-8fa6-58d2-888b-072b5e96ec0c'})  2026-01-10 14:29:28.090919 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-82a5292d-e4f5-5675-b04e-23ddf5e1abb7', 'data_vg': 'ceph-82a5292d-e4f5-5675-b04e-23ddf5e1abb7'})  2026-01-10 14:29:28.090930 | orchestrator | skipping: 
[testbed-node-5] 2026-01-10 14:29:28.090940 | orchestrator | 2026-01-10 14:29:28.090951 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-01-10 14:29:28.090962 | orchestrator | Saturday 10 January 2026 14:29:25 +0000 (0:00:00.162) 0:01:05.180 ****** 2026-01-10 14:29:28.090974 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-377cb61f-8fa6-58d2-888b-072b5e96ec0c', 'data_vg': 'ceph-377cb61f-8fa6-58d2-888b-072b5e96ec0c'}) 2026-01-10 14:29:28.090985 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-82a5292d-e4f5-5675-b04e-23ddf5e1abb7', 'data_vg': 'ceph-82a5292d-e4f5-5675-b04e-23ddf5e1abb7'}) 2026-01-10 14:29:28.090996 | orchestrator | 2026-01-10 14:29:28.091006 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-01-10 14:29:28.091017 | orchestrator | Saturday 10 January 2026 14:29:26 +0000 (0:00:01.395) 0:01:06.575 ****** 2026-01-10 14:29:28.091028 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-377cb61f-8fa6-58d2-888b-072b5e96ec0c', 'data_vg': 'ceph-377cb61f-8fa6-58d2-888b-072b5e96ec0c'})  2026-01-10 14:29:28.091039 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-82a5292d-e4f5-5675-b04e-23ddf5e1abb7', 'data_vg': 'ceph-82a5292d-e4f5-5675-b04e-23ddf5e1abb7'})  2026-01-10 14:29:28.091050 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:29:28.091060 | orchestrator | 2026-01-10 14:29:28.091071 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-01-10 14:29:28.091082 | orchestrator | Saturday 10 January 2026 14:29:26 +0000 (0:00:00.146) 0:01:06.722 ****** 2026-01-10 14:29:28.091093 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:29:28.091103 | orchestrator | 2026-01-10 14:29:28.091115 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-01-10 14:29:28.091125 | 
orchestrator | Saturday 10 January 2026 14:29:26 +0000 (0:00:00.145) 0:01:06.867 ****** 2026-01-10 14:29:28.091141 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-377cb61f-8fa6-58d2-888b-072b5e96ec0c', 'data_vg': 'ceph-377cb61f-8fa6-58d2-888b-072b5e96ec0c'})  2026-01-10 14:29:28.091153 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-82a5292d-e4f5-5675-b04e-23ddf5e1abb7', 'data_vg': 'ceph-82a5292d-e4f5-5675-b04e-23ddf5e1abb7'})  2026-01-10 14:29:28.091164 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:29:28.091175 | orchestrator | 2026-01-10 14:29:28.091185 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-01-10 14:29:28.091196 | orchestrator | Saturday 10 January 2026 14:29:26 +0000 (0:00:00.163) 0:01:07.031 ****** 2026-01-10 14:29:28.091214 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:29:28.091225 | orchestrator | 2026-01-10 14:29:28.091236 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-01-10 14:29:28.091246 | orchestrator | Saturday 10 January 2026 14:29:27 +0000 (0:00:00.142) 0:01:07.174 ****** 2026-01-10 14:29:28.091257 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-377cb61f-8fa6-58d2-888b-072b5e96ec0c', 'data_vg': 'ceph-377cb61f-8fa6-58d2-888b-072b5e96ec0c'})  2026-01-10 14:29:28.091268 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-82a5292d-e4f5-5675-b04e-23ddf5e1abb7', 'data_vg': 'ceph-82a5292d-e4f5-5675-b04e-23ddf5e1abb7'})  2026-01-10 14:29:28.091279 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:29:28.091290 | orchestrator | 2026-01-10 14:29:28.091301 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-01-10 14:29:28.091311 | orchestrator | Saturday 10 January 2026 14:29:27 +0000 (0:00:00.152) 0:01:07.327 ****** 2026-01-10 14:29:28.091322 | orchestrator | 
skipping: [testbed-node-5] 2026-01-10 14:29:28.091333 | orchestrator | 2026-01-10 14:29:28.091343 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-01-10 14:29:28.091354 | orchestrator | Saturday 10 January 2026 14:29:27 +0000 (0:00:00.141) 0:01:07.468 ****** 2026-01-10 14:29:28.091365 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-377cb61f-8fa6-58d2-888b-072b5e96ec0c', 'data_vg': 'ceph-377cb61f-8fa6-58d2-888b-072b5e96ec0c'})  2026-01-10 14:29:28.091376 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-82a5292d-e4f5-5675-b04e-23ddf5e1abb7', 'data_vg': 'ceph-82a5292d-e4f5-5675-b04e-23ddf5e1abb7'})  2026-01-10 14:29:28.091387 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:29:28.091397 | orchestrator | 2026-01-10 14:29:28.091408 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-01-10 14:29:28.091455 | orchestrator | Saturday 10 January 2026 14:29:27 +0000 (0:00:00.156) 0:01:07.624 ****** 2026-01-10 14:29:28.091473 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:29:28.091491 | orchestrator | 2026-01-10 14:29:28.091508 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-01-10 14:29:28.091525 | orchestrator | Saturday 10 January 2026 14:29:27 +0000 (0:00:00.344) 0:01:07.968 ****** 2026-01-10 14:29:28.091553 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-377cb61f-8fa6-58d2-888b-072b5e96ec0c', 'data_vg': 'ceph-377cb61f-8fa6-58d2-888b-072b5e96ec0c'})  2026-01-10 14:29:34.389602 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-82a5292d-e4f5-5675-b04e-23ddf5e1abb7', 'data_vg': 'ceph-82a5292d-e4f5-5675-b04e-23ddf5e1abb7'})  2026-01-10 14:29:34.389689 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:29:34.389700 | orchestrator | 2026-01-10 14:29:34.389707 | orchestrator | TASK [Count OSDs put on 
ceph_wal_devices defined in lvm_volumes] *************** 2026-01-10 14:29:34.389714 | orchestrator | Saturday 10 January 2026 14:29:28 +0000 (0:00:00.178) 0:01:08.147 ****** 2026-01-10 14:29:34.389720 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-377cb61f-8fa6-58d2-888b-072b5e96ec0c', 'data_vg': 'ceph-377cb61f-8fa6-58d2-888b-072b5e96ec0c'})  2026-01-10 14:29:34.389726 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-82a5292d-e4f5-5675-b04e-23ddf5e1abb7', 'data_vg': 'ceph-82a5292d-e4f5-5675-b04e-23ddf5e1abb7'})  2026-01-10 14:29:34.389732 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:29:34.389737 | orchestrator | 2026-01-10 14:29:34.389743 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-01-10 14:29:34.389749 | orchestrator | Saturday 10 January 2026 14:29:28 +0000 (0:00:00.158) 0:01:08.306 ****** 2026-01-10 14:29:34.389754 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-377cb61f-8fa6-58d2-888b-072b5e96ec0c', 'data_vg': 'ceph-377cb61f-8fa6-58d2-888b-072b5e96ec0c'})  2026-01-10 14:29:34.389759 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-82a5292d-e4f5-5675-b04e-23ddf5e1abb7', 'data_vg': 'ceph-82a5292d-e4f5-5675-b04e-23ddf5e1abb7'})  2026-01-10 14:29:34.389783 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:29:34.389788 | orchestrator | 2026-01-10 14:29:34.389794 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-01-10 14:29:34.389799 | orchestrator | Saturday 10 January 2026 14:29:28 +0000 (0:00:00.162) 0:01:08.468 ****** 2026-01-10 14:29:34.389805 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:29:34.389810 | orchestrator | 2026-01-10 14:29:34.389815 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-01-10 14:29:34.389820 | orchestrator | Saturday 10 January 2026 14:29:28 +0000 
(0:00:00.129) 0:01:08.597 ****** 2026-01-10 14:29:34.389826 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:29:34.389831 | orchestrator | 2026-01-10 14:29:34.389836 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2026-01-10 14:29:34.389842 | orchestrator | Saturday 10 January 2026 14:29:28 +0000 (0:00:00.153) 0:01:08.751 ****** 2026-01-10 14:29:34.389847 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:29:34.389852 | orchestrator | 2026-01-10 14:29:34.389868 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-01-10 14:29:34.389874 | orchestrator | Saturday 10 January 2026 14:29:28 +0000 (0:00:00.133) 0:01:08.884 ****** 2026-01-10 14:29:34.389879 | orchestrator | ok: [testbed-node-5] => { 2026-01-10 14:29:34.389885 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-01-10 14:29:34.389890 | orchestrator | } 2026-01-10 14:29:34.389896 | orchestrator | 2026-01-10 14:29:34.389902 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-01-10 14:29:34.389907 | orchestrator | Saturday 10 January 2026 14:29:28 +0000 (0:00:00.142) 0:01:09.027 ****** 2026-01-10 14:29:34.389912 | orchestrator | ok: [testbed-node-5] => { 2026-01-10 14:29:34.389918 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-01-10 14:29:34.389923 | orchestrator | } 2026-01-10 14:29:34.389928 | orchestrator | 2026-01-10 14:29:34.389934 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-01-10 14:29:34.389939 | orchestrator | Saturday 10 January 2026 14:29:29 +0000 (0:00:00.148) 0:01:09.176 ****** 2026-01-10 14:29:34.389944 | orchestrator | ok: [testbed-node-5] => { 2026-01-10 14:29:34.389950 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-01-10 14:29:34.389955 | orchestrator | } 2026-01-10 14:29:34.389960 | orchestrator | 2026-01-10 14:29:34.389965 | orchestrator | TASK 
[Gather DB VGs with total and available size in bytes] ******************** 2026-01-10 14:29:34.389971 | orchestrator | Saturday 10 January 2026 14:29:29 +0000 (0:00:00.148) 0:01:09.325 ****** 2026-01-10 14:29:34.389976 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:29:34.389981 | orchestrator | 2026-01-10 14:29:34.389987 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-01-10 14:29:34.389992 | orchestrator | Saturday 10 January 2026 14:29:29 +0000 (0:00:00.570) 0:01:09.896 ****** 2026-01-10 14:29:34.389997 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:29:34.390002 | orchestrator | 2026-01-10 14:29:34.390008 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-01-10 14:29:34.390053 | orchestrator | Saturday 10 January 2026 14:29:30 +0000 (0:00:00.542) 0:01:10.438 ****** 2026-01-10 14:29:34.390060 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:29:34.390065 | orchestrator | 2026-01-10 14:29:34.390071 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2026-01-10 14:29:34.390076 | orchestrator | Saturday 10 January 2026 14:29:31 +0000 (0:00:00.763) 0:01:11.202 ****** 2026-01-10 14:29:34.390082 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:29:34.390087 | orchestrator | 2026-01-10 14:29:34.390092 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-01-10 14:29:34.390098 | orchestrator | Saturday 10 January 2026 14:29:31 +0000 (0:00:00.154) 0:01:11.356 ****** 2026-01-10 14:29:34.390103 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:29:34.390108 | orchestrator | 2026-01-10 14:29:34.390114 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-01-10 14:29:34.390124 | orchestrator | Saturday 10 January 2026 14:29:31 +0000 (0:00:00.113) 0:01:11.470 ****** 2026-01-10 14:29:34.390129 | orchestrator | 
skipping: [testbed-node-5] 2026-01-10 14:29:34.390135 | orchestrator | 2026-01-10 14:29:34.390140 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-01-10 14:29:34.390145 | orchestrator | Saturday 10 January 2026 14:29:31 +0000 (0:00:00.105) 0:01:11.576 ****** 2026-01-10 14:29:34.390151 | orchestrator | ok: [testbed-node-5] => { 2026-01-10 14:29:34.390157 | orchestrator |  "vgs_report": { 2026-01-10 14:29:34.390163 | orchestrator |  "vg": [] 2026-01-10 14:29:34.390181 | orchestrator |  } 2026-01-10 14:29:34.390188 | orchestrator | } 2026-01-10 14:29:34.390194 | orchestrator | 2026-01-10 14:29:34.390200 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-01-10 14:29:34.390206 | orchestrator | Saturday 10 January 2026 14:29:31 +0000 (0:00:00.170) 0:01:11.746 ****** 2026-01-10 14:29:34.390212 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:29:34.390218 | orchestrator | 2026-01-10 14:29:34.390224 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2026-01-10 14:29:34.390230 | orchestrator | Saturday 10 January 2026 14:29:31 +0000 (0:00:00.140) 0:01:11.887 ****** 2026-01-10 14:29:34.390236 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:29:34.390242 | orchestrator | 2026-01-10 14:29:34.390248 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-01-10 14:29:34.390253 | orchestrator | Saturday 10 January 2026 14:29:31 +0000 (0:00:00.142) 0:01:12.029 ****** 2026-01-10 14:29:34.390259 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:29:34.390264 | orchestrator | 2026-01-10 14:29:34.390269 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-01-10 14:29:34.390274 | orchestrator | Saturday 10 January 2026 14:29:32 +0000 (0:00:00.135) 0:01:12.165 ****** 2026-01-10 14:29:34.390280 | orchestrator | 
skipping: [testbed-node-5] 2026-01-10 14:29:34.390285 | orchestrator | 2026-01-10 14:29:34.390291 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-01-10 14:29:34.390296 | orchestrator | Saturday 10 January 2026 14:29:32 +0000 (0:00:00.158) 0:01:12.323 ****** 2026-01-10 14:29:34.390301 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:29:34.390307 | orchestrator | 2026-01-10 14:29:34.390312 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-01-10 14:29:34.390317 | orchestrator | Saturday 10 January 2026 14:29:32 +0000 (0:00:00.145) 0:01:12.468 ****** 2026-01-10 14:29:34.390323 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:29:34.390328 | orchestrator | 2026-01-10 14:29:34.390333 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-01-10 14:29:34.390339 | orchestrator | Saturday 10 January 2026 14:29:32 +0000 (0:00:00.140) 0:01:12.609 ****** 2026-01-10 14:29:34.390344 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:29:34.390349 | orchestrator | 2026-01-10 14:29:34.390355 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-01-10 14:29:34.390360 | orchestrator | Saturday 10 January 2026 14:29:32 +0000 (0:00:00.141) 0:01:12.750 ****** 2026-01-10 14:29:34.390365 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:29:34.390370 | orchestrator | 2026-01-10 14:29:34.390376 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-01-10 14:29:34.390381 | orchestrator | Saturday 10 January 2026 14:29:33 +0000 (0:00:00.340) 0:01:13.090 ****** 2026-01-10 14:29:34.390386 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:29:34.390392 | orchestrator | 2026-01-10 14:29:34.390401 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-01-10 
14:29:34.390406 | orchestrator | Saturday 10 January 2026 14:29:33 +0000 (0:00:00.139) 0:01:13.230 ****** 2026-01-10 14:29:34.390412 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:29:34.390417 | orchestrator | 2026-01-10 14:29:34.390422 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-01-10 14:29:34.390489 | orchestrator | Saturday 10 January 2026 14:29:33 +0000 (0:00:00.148) 0:01:13.379 ****** 2026-01-10 14:29:34.390501 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:29:34.390507 | orchestrator | 2026-01-10 14:29:34.390512 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-01-10 14:29:34.390518 | orchestrator | Saturday 10 January 2026 14:29:33 +0000 (0:00:00.135) 0:01:13.515 ****** 2026-01-10 14:29:34.390523 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:29:34.390528 | orchestrator | 2026-01-10 14:29:34.390534 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-01-10 14:29:34.390539 | orchestrator | Saturday 10 January 2026 14:29:33 +0000 (0:00:00.147) 0:01:13.662 ****** 2026-01-10 14:29:34.390544 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:29:34.390550 | orchestrator | 2026-01-10 14:29:34.390555 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-01-10 14:29:34.390561 | orchestrator | Saturday 10 January 2026 14:29:33 +0000 (0:00:00.134) 0:01:13.797 ****** 2026-01-10 14:29:34.390566 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:29:34.390571 | orchestrator | 2026-01-10 14:29:34.390577 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-01-10 14:29:34.390582 | orchestrator | Saturday 10 January 2026 14:29:33 +0000 (0:00:00.147) 0:01:13.945 ****** 2026-01-10 14:29:34.390588 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-377cb61f-8fa6-58d2-888b-072b5e96ec0c', 'data_vg': 'ceph-377cb61f-8fa6-58d2-888b-072b5e96ec0c'})  2026-01-10 14:29:34.390593 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-82a5292d-e4f5-5675-b04e-23ddf5e1abb7', 'data_vg': 'ceph-82a5292d-e4f5-5675-b04e-23ddf5e1abb7'})  2026-01-10 14:29:34.390599 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:29:34.390604 | orchestrator | 2026-01-10 14:29:34.390610 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-01-10 14:29:34.390615 | orchestrator | Saturday 10 January 2026 14:29:34 +0000 (0:00:00.165) 0:01:14.110 ****** 2026-01-10 14:29:34.390620 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-377cb61f-8fa6-58d2-888b-072b5e96ec0c', 'data_vg': 'ceph-377cb61f-8fa6-58d2-888b-072b5e96ec0c'})  2026-01-10 14:29:34.390626 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-82a5292d-e4f5-5675-b04e-23ddf5e1abb7', 'data_vg': 'ceph-82a5292d-e4f5-5675-b04e-23ddf5e1abb7'})  2026-01-10 14:29:34.390631 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:29:34.390636 | orchestrator | 2026-01-10 14:29:34.390642 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-01-10 14:29:34.390647 | orchestrator | Saturday 10 January 2026 14:29:34 +0000 (0:00:00.177) 0:01:14.288 ****** 2026-01-10 14:29:34.390658 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-377cb61f-8fa6-58d2-888b-072b5e96ec0c', 'data_vg': 'ceph-377cb61f-8fa6-58d2-888b-072b5e96ec0c'})  2026-01-10 14:29:37.586252 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-82a5292d-e4f5-5675-b04e-23ddf5e1abb7', 'data_vg': 'ceph-82a5292d-e4f5-5675-b04e-23ddf5e1abb7'})  2026-01-10 14:29:37.586342 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:29:37.586352 | orchestrator | 2026-01-10 14:29:37.586358 | orchestrator | TASK [Print 'Create WAL LVs for 
ceph_wal_devices'] *****************************
2026-01-10 14:29:37.586368 | orchestrator | Saturday 10 January 2026 14:29:34 +0000 (0:00:00.160) 0:01:14.448 ******
2026-01-10 14:29:37.586374 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-377cb61f-8fa6-58d2-888b-072b5e96ec0c', 'data_vg': 'ceph-377cb61f-8fa6-58d2-888b-072b5e96ec0c'})
2026-01-10 14:29:37.586381 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-82a5292d-e4f5-5675-b04e-23ddf5e1abb7', 'data_vg': 'ceph-82a5292d-e4f5-5675-b04e-23ddf5e1abb7'})
2026-01-10 14:29:37.586388 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:29:37.586396 | orchestrator |
2026-01-10 14:29:37.586400 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-01-10 14:29:37.586405 | orchestrator | Saturday 10 January 2026 14:29:34 +0000 (0:00:00.158) 0:01:14.607 ******
2026-01-10 14:29:37.586424 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-377cb61f-8fa6-58d2-888b-072b5e96ec0c', 'data_vg': 'ceph-377cb61f-8fa6-58d2-888b-072b5e96ec0c'})
2026-01-10 14:29:37.586429 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-82a5292d-e4f5-5675-b04e-23ddf5e1abb7', 'data_vg': 'ceph-82a5292d-e4f5-5675-b04e-23ddf5e1abb7'})
2026-01-10 14:29:37.586433 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:29:37.586476 | orchestrator |
2026-01-10 14:29:37.586480 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-01-10 14:29:37.586484 | orchestrator | Saturday 10 January 2026 14:29:34 +0000 (0:00:00.165) 0:01:14.772 ******
2026-01-10 14:29:37.586488 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-377cb61f-8fa6-58d2-888b-072b5e96ec0c', 'data_vg': 'ceph-377cb61f-8fa6-58d2-888b-072b5e96ec0c'})
2026-01-10 14:29:37.586492 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-82a5292d-e4f5-5675-b04e-23ddf5e1abb7', 'data_vg': 'ceph-82a5292d-e4f5-5675-b04e-23ddf5e1abb7'})
2026-01-10 14:29:37.586496 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:29:37.586499 | orchestrator |
2026-01-10 14:29:37.586503 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-01-10 14:29:37.586507 | orchestrator | Saturday 10 January 2026 14:29:35 +0000 (0:00:00.366) 0:01:15.139 ******
2026-01-10 14:29:37.586511 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-377cb61f-8fa6-58d2-888b-072b5e96ec0c', 'data_vg': 'ceph-377cb61f-8fa6-58d2-888b-072b5e96ec0c'})
2026-01-10 14:29:37.586515 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-82a5292d-e4f5-5675-b04e-23ddf5e1abb7', 'data_vg': 'ceph-82a5292d-e4f5-5675-b04e-23ddf5e1abb7'})
2026-01-10 14:29:37.586519 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:29:37.586523 | orchestrator |
2026-01-10 14:29:37.586526 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-01-10 14:29:37.586530 | orchestrator | Saturday 10 January 2026 14:29:35 +0000 (0:00:00.163) 0:01:15.302 ******
2026-01-10 14:29:37.586534 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-377cb61f-8fa6-58d2-888b-072b5e96ec0c', 'data_vg': 'ceph-377cb61f-8fa6-58d2-888b-072b5e96ec0c'})
2026-01-10 14:29:37.586537 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-82a5292d-e4f5-5675-b04e-23ddf5e1abb7', 'data_vg': 'ceph-82a5292d-e4f5-5675-b04e-23ddf5e1abb7'})
2026-01-10 14:29:37.586541 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:29:37.586545 | orchestrator |
2026-01-10 14:29:37.586548 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-01-10 14:29:37.586552 | orchestrator | Saturday 10 January 2026 14:29:35 +0000 (0:00:00.178) 0:01:15.481 ******
2026-01-10 14:29:37.586556 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:29:37.586561 | orchestrator |
2026-01-10 14:29:37.586564 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-01-10 14:29:37.586568 | orchestrator | Saturday 10 January 2026 14:29:35 +0000 (0:00:00.566) 0:01:16.048 ******
2026-01-10 14:29:37.586574 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:29:37.586580 | orchestrator |
2026-01-10 14:29:37.586585 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2026-01-10 14:29:37.586591 | orchestrator | Saturday 10 January 2026 14:29:36 +0000 (0:00:00.551) 0:01:16.599 ******
2026-01-10 14:29:37.586597 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:29:37.586602 | orchestrator |
2026-01-10 14:29:37.586608 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2026-01-10 14:29:37.586613 | orchestrator | Saturday 10 January 2026 14:29:36 +0000 (0:00:00.143) 0:01:16.743 ******
2026-01-10 14:29:37.586618 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-377cb61f-8fa6-58d2-888b-072b5e96ec0c', 'vg_name': 'ceph-377cb61f-8fa6-58d2-888b-072b5e96ec0c'})
2026-01-10 14:29:37.586626 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-82a5292d-e4f5-5675-b04e-23ddf5e1abb7', 'vg_name': 'ceph-82a5292d-e4f5-5675-b04e-23ddf5e1abb7'})
2026-01-10 14:29:37.586648 | orchestrator |
2026-01-10 14:29:37.586660 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-01-10 14:29:37.586664 | orchestrator | Saturday 10 January 2026 14:29:36 +0000 (0:00:00.179) 0:01:16.922 ******
2026-01-10 14:29:37.586694 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-377cb61f-8fa6-58d2-888b-072b5e96ec0c', 'data_vg': 'ceph-377cb61f-8fa6-58d2-888b-072b5e96ec0c'})
2026-01-10 14:29:37.586699 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-82a5292d-e4f5-5675-b04e-23ddf5e1abb7', 'data_vg': 'ceph-82a5292d-e4f5-5675-b04e-23ddf5e1abb7'})
2026-01-10 14:29:37.586702 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:29:37.586706 | orchestrator |
2026-01-10 14:29:37.586710 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2026-01-10 14:29:37.586714 | orchestrator | Saturday 10 January 2026 14:29:37 +0000 (0:00:00.190) 0:01:17.113 ******
2026-01-10 14:29:37.586718 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-377cb61f-8fa6-58d2-888b-072b5e96ec0c', 'data_vg': 'ceph-377cb61f-8fa6-58d2-888b-072b5e96ec0c'})
2026-01-10 14:29:37.586722 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-82a5292d-e4f5-5675-b04e-23ddf5e1abb7', 'data_vg': 'ceph-82a5292d-e4f5-5675-b04e-23ddf5e1abb7'})
2026-01-10 14:29:37.586726 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:29:37.586729 | orchestrator |
2026-01-10 14:29:37.586733 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2026-01-10 14:29:37.586737 | orchestrator | Saturday 10 January 2026 14:29:37 +0000 (0:00:00.160) 0:01:17.274 ******
2026-01-10 14:29:37.586740 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-377cb61f-8fa6-58d2-888b-072b5e96ec0c', 'data_vg': 'ceph-377cb61f-8fa6-58d2-888b-072b5e96ec0c'})
2026-01-10 14:29:37.586744 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-82a5292d-e4f5-5675-b04e-23ddf5e1abb7', 'data_vg': 'ceph-82a5292d-e4f5-5675-b04e-23ddf5e1abb7'})
2026-01-10 14:29:37.586748 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:29:37.586752 | orchestrator |
2026-01-10 14:29:37.586756 | orchestrator | TASK [Print LVM report data] ***************************************************
2026-01-10 14:29:37.586759 | orchestrator | Saturday 10 January 2026 14:29:37 +0000 (0:00:00.175) 0:01:17.450 ******
2026-01-10 14:29:37.586763 | orchestrator | ok: [testbed-node-5] => {
2026-01-10 14:29:37.586767 | orchestrator |     "lvm_report": {
2026-01-10 14:29:37.586771 | orchestrator |         "lv": [
2026-01-10 14:29:37.586775 | orchestrator |             {
2026-01-10 14:29:37.586780 | orchestrator |                 "lv_name": "osd-block-377cb61f-8fa6-58d2-888b-072b5e96ec0c",
2026-01-10 14:29:37.586787 | orchestrator |                 "vg_name": "ceph-377cb61f-8fa6-58d2-888b-072b5e96ec0c"
2026-01-10 14:29:37.586792 | orchestrator |             },
2026-01-10 14:29:37.586796 | orchestrator |             {
2026-01-10 14:29:37.586801 | orchestrator |                 "lv_name": "osd-block-82a5292d-e4f5-5675-b04e-23ddf5e1abb7",
2026-01-10 14:29:37.586805 | orchestrator |                 "vg_name": "ceph-82a5292d-e4f5-5675-b04e-23ddf5e1abb7"
2026-01-10 14:29:37.586809 | orchestrator |             }
2026-01-10 14:29:37.586813 | orchestrator |         ],
2026-01-10 14:29:37.586818 | orchestrator |         "pv": [
2026-01-10 14:29:37.586822 | orchestrator |             {
2026-01-10 14:29:37.586826 | orchestrator |                 "pv_name": "/dev/sdb",
2026-01-10 14:29:37.586830 | orchestrator |                 "vg_name": "ceph-377cb61f-8fa6-58d2-888b-072b5e96ec0c"
2026-01-10 14:29:37.586834 | orchestrator |             },
2026-01-10 14:29:37.586838 | orchestrator |             {
2026-01-10 14:29:37.586842 | orchestrator |                 "pv_name": "/dev/sdc",
2026-01-10 14:29:37.586847 | orchestrator |                 "vg_name": "ceph-82a5292d-e4f5-5675-b04e-23ddf5e1abb7"
2026-01-10 14:29:37.586851 | orchestrator |             }
2026-01-10 14:29:37.586855 | orchestrator |         ]
2026-01-10 14:29:37.586860 | orchestrator |     }
2026-01-10 14:29:37.586864 | orchestrator | }
2026-01-10 14:29:37.586872 | orchestrator |
2026-01-10 14:29:37.586877 | orchestrator | PLAY RECAP *********************************************************************
2026-01-10 14:29:37.586881 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2026-01-10 14:29:37.586885 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2026-01-10 14:29:37.586890 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2026-01-10 14:29:37.586894 | orchestrator |
2026-01-10 14:29:37.586898 | orchestrator |
2026-01-10 14:29:37.586902 | orchestrator |
2026-01-10 14:29:37.586906 | orchestrator | TASKS RECAP ********************************************************************
2026-01-10 14:29:37.586911 | orchestrator | Saturday 10 January 2026 14:29:37 +0000 (0:00:00.170) 0:01:17.621 ******
2026-01-10 14:29:37.586915 | orchestrator | ===============================================================================
2026-01-10 14:29:37.586919 | orchestrator | Create block VGs -------------------------------------------------------- 5.99s
2026-01-10 14:29:37.586923 | orchestrator | Create block LVs -------------------------------------------------------- 4.25s
2026-01-10 14:29:37.586927 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.87s
2026-01-10 14:29:37.586932 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.85s
2026-01-10 14:29:37.586936 | orchestrator | Add known partitions to the list of available block devices ------------- 1.81s
2026-01-10 14:29:37.586940 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.66s
2026-01-10 14:29:37.586945 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.63s
2026-01-10 14:29:37.586948 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.56s
2026-01-10 14:29:37.586956 | orchestrator | Add known links to the list of available block devices ------------------ 1.42s
2026-01-10 14:29:38.006870 | orchestrator | Add known partitions to the list of available block devices ------------- 1.18s
2026-01-10 14:29:38.006974 | orchestrator | Print LVM report data --------------------------------------------------- 1.09s
2026-01-10 14:29:38.006989 | orchestrator | Add known links to the list of available block devices ------------------ 0.93s
2026-01-10 14:29:38.007002 | orchestrator | Add known partitions to the list of available block devices ------------- 0.89s
2026-01-10 14:29:38.007013 | orchestrator | Add known links to the list of available block devices ------------------ 0.81s
2026-01-10 14:29:38.007023 | orchestrator | Fail if block LV defined in lvm_volumes is missing ---------------------- 0.78s
2026-01-10 14:29:38.007034 | orchestrator | Print 'Create WAL VGs' -------------------------------------------------- 0.77s
2026-01-10 14:29:38.007044 | orchestrator | Get initial list of available block devices ----------------------------- 0.77s
2026-01-10 14:29:38.007055 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.76s
2026-01-10 14:29:38.007065 | orchestrator | Create WAL LVs for ceph_wal_devices ------------------------------------- 0.75s
2026-01-10 14:29:38.007076 | orchestrator | Print 'Create DB VGs' --------------------------------------------------- 0.74s
2026-01-10 14:29:50.234402 | orchestrator | 2026-01-10 14:29:50 | INFO  | Task 841d8567-eca5-4a8a-bc52-82982eda75db (facts) was prepared for execution.
2026-01-10 14:29:50.234628 | orchestrator | 2026-01-10 14:29:50 | INFO  | It takes a moment until task 841d8567-eca5-4a8a-bc52-82982eda75db (facts) has been started and output is visible here.
2026-01-10 14:30:02.284350 | orchestrator |
2026-01-10 14:30:02.284451 | orchestrator | PLAY [Apply role facts] ********************************************************
2026-01-10 14:30:02.284461 | orchestrator |
2026-01-10 14:30:02.284469 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-01-10 14:30:02.284478 | orchestrator | Saturday 10 January 2026 14:29:54 +0000 (0:00:00.239) 0:00:00.239 ******
2026-01-10 14:30:02.284543 | orchestrator | ok: [testbed-manager]
2026-01-10 14:30:02.284550 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:30:02.284554 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:30:02.284558 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:30:02.284562 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:30:02.284566 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:30:02.284570 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:30:02.284577 | orchestrator |
2026-01-10 14:30:02.284584 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-01-10 14:30:02.284605 | orchestrator | Saturday 10 January 2026 14:29:55 +0000 (0:00:01.056) 0:00:01.296 ******
2026-01-10 14:30:02.284614 | orchestrator | skipping: [testbed-manager]
2026-01-10 14:30:02.284622 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:30:02.284629 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:30:02.284635 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:30:02.284642 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:30:02.284648 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:30:02.284652 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:30:02.284658 | orchestrator |
2026-01-10 14:30:02.284665 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-01-10 14:30:02.284671 | orchestrator |
2026-01-10 14:30:02.284678 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-01-10 14:30:02.284684 | orchestrator | Saturday 10 January 2026 14:29:56 +0000 (0:00:01.122) 0:00:02.418 ******
2026-01-10 14:30:02.284690 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:30:02.284697 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:30:02.284703 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:30:02.284710 | orchestrator | ok: [testbed-manager]
2026-01-10 14:30:02.284716 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:30:02.284724 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:30:02.284730 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:30:02.284736 | orchestrator |
2026-01-10 14:30:02.284743 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-01-10 14:30:02.284749 | orchestrator |
2026-01-10 14:30:02.284755 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-01-10 14:30:02.284761 | orchestrator | Saturday 10 January 2026 14:30:01 +0000 (0:00:05.214) 0:00:07.632 ******
2026-01-10 14:30:02.284767 | orchestrator | skipping: [testbed-manager]
2026-01-10 14:30:02.284774 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:30:02.284780 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:30:02.284787 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:30:02.284793 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:30:02.284799 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:30:02.284805 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:30:02.284811 | orchestrator |
2026-01-10 14:30:02.284818 | orchestrator | PLAY RECAP *********************************************************************
2026-01-10 14:30:02.284825 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-10 14:30:02.284833 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-10 14:30:02.284839 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-10 14:30:02.284845 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-10 14:30:02.284852 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-10 14:30:02.284857 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-10 14:30:02.284864 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-10 14:30:02.284876 | orchestrator |
2026-01-10 14:30:02.284883 | orchestrator |
2026-01-10 14:30:02.284889 | orchestrator | TASKS RECAP ********************************************************************
2026-01-10 14:30:02.284896 | orchestrator | Saturday 10 January 2026 14:30:01 +0000 (0:00:00.482) 0:00:08.115 ******
2026-01-10 14:30:02.284903 | orchestrator | ===============================================================================
2026-01-10 14:30:02.284909 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.21s
2026-01-10 14:30:02.284916 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.12s
2026-01-10 14:30:02.284923 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.06s
2026-01-10 14:30:02.284930 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.48s
2026-01-10 14:30:14.356087 | orchestrator | 2026-01-10 14:30:14 | INFO  | Task 92821fbd-d01c-43db-adbe-07d876225990 (frr) was prepared for execution.
2026-01-10 14:30:14.356184 | orchestrator | 2026-01-10 14:30:14 | INFO  | It takes a moment until task 92821fbd-d01c-43db-adbe-07d876225990 (frr) has been started and output is visible here.
2026-01-10 14:30:40.884685 | orchestrator | 2026-01-10 14:30:40.884790 | orchestrator | PLAY [Apply role frr] ********************************************************** 2026-01-10 14:30:40.884803 | orchestrator | 2026-01-10 14:30:40.884810 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ******** 2026-01-10 14:30:40.884818 | orchestrator | Saturday 10 January 2026 14:30:18 +0000 (0:00:00.242) 0:00:00.242 ****** 2026-01-10 14:30:40.884825 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager 2026-01-10 14:30:40.884833 | orchestrator | 2026-01-10 14:30:40.884839 | orchestrator | TASK [osism.services.frr : Pin frr package version] **************************** 2026-01-10 14:30:40.884845 | orchestrator | Saturday 10 January 2026 14:30:18 +0000 (0:00:00.237) 0:00:00.479 ****** 2026-01-10 14:30:40.884852 | orchestrator | changed: [testbed-manager] 2026-01-10 14:30:40.884859 | orchestrator | 2026-01-10 14:30:40.884865 | orchestrator | TASK [osism.services.frr : Install frr package] ******************************** 2026-01-10 14:30:40.884871 | orchestrator | Saturday 10 January 2026 14:30:19 +0000 (0:00:01.236) 0:00:01.715 ****** 2026-01-10 14:30:40.884892 | orchestrator | changed: [testbed-manager] 2026-01-10 14:30:40.884899 | orchestrator | 2026-01-10 14:30:40.884906 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] ********************* 2026-01-10 14:30:40.884912 | orchestrator | Saturday 10 January 2026 14:30:30 +0000 (0:00:10.797) 0:00:12.513 ****** 2026-01-10 14:30:40.884918 | orchestrator | ok: [testbed-manager] 2026-01-10 14:30:40.884925 | orchestrator | 2026-01-10 14:30:40.884931 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************ 2026-01-10 14:30:40.884937 | orchestrator | Saturday 10 January 2026 14:30:31 +0000 (0:00:01.017) 0:00:13.531 ****** 2026-01-10 
14:30:40.884944 | orchestrator | changed: [testbed-manager] 2026-01-10 14:30:40.884950 | orchestrator | 2026-01-10 14:30:40.884956 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ****************************** 2026-01-10 14:30:40.884962 | orchestrator | Saturday 10 January 2026 14:30:32 +0000 (0:00:00.986) 0:00:14.517 ****** 2026-01-10 14:30:40.884969 | orchestrator | ok: [testbed-manager] 2026-01-10 14:30:40.884975 | orchestrator | 2026-01-10 14:30:40.884982 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] *** 2026-01-10 14:30:40.884989 | orchestrator | Saturday 10 January 2026 14:30:33 +0000 (0:00:01.186) 0:00:15.703 ****** 2026-01-10 14:30:40.884995 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:30:40.885001 | orchestrator | 2026-01-10 14:30:40.885008 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] *** 2026-01-10 14:30:40.885014 | orchestrator | Saturday 10 January 2026 14:30:34 +0000 (0:00:00.146) 0:00:15.850 ****** 2026-01-10 14:30:40.885021 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:30:40.885048 | orchestrator | 2026-01-10 14:30:40.885055 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ****** 2026-01-10 14:30:40.885062 | orchestrator | Saturday 10 January 2026 14:30:34 +0000 (0:00:00.169) 0:00:16.020 ****** 2026-01-10 14:30:40.885068 | orchestrator | changed: [testbed-manager] 2026-01-10 14:30:40.885074 | orchestrator | 2026-01-10 14:30:40.885080 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ****************************** 2026-01-10 14:30:40.885086 | orchestrator | Saturday 10 January 2026 14:30:35 +0000 (0:00:01.029) 0:00:17.049 ****** 2026-01-10 14:30:40.885093 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1}) 2026-01-10 14:30:40.885099 | orchestrator | changed: [testbed-manager] => (item={'name': 
'net.ipv4.conf.all.send_redirects', 'value': 0}) 2026-01-10 14:30:40.885106 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0}) 2026-01-10 14:30:40.885112 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1}) 2026-01-10 14:30:40.885118 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1}) 2026-01-10 14:30:40.885124 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2}) 2026-01-10 14:30:40.885131 | orchestrator | 2026-01-10 14:30:40.885137 | orchestrator | TASK [osism.services.frr : Manage frr service] ********************************* 2026-01-10 14:30:40.885144 | orchestrator | Saturday 10 January 2026 14:30:37 +0000 (0:00:02.286) 0:00:19.336 ****** 2026-01-10 14:30:40.885150 | orchestrator | ok: [testbed-manager] 2026-01-10 14:30:40.885156 | orchestrator | 2026-01-10 14:30:40.885163 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] ********************* 2026-01-10 14:30:40.885169 | orchestrator | Saturday 10 January 2026 14:30:39 +0000 (0:00:01.637) 0:00:20.974 ****** 2026-01-10 14:30:40.885175 | orchestrator | changed: [testbed-manager] 2026-01-10 14:30:40.885181 | orchestrator | 2026-01-10 14:30:40.885187 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-10 14:30:40.885195 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-10 14:30:40.885201 | orchestrator | 2026-01-10 14:30:40.885208 | orchestrator | 2026-01-10 14:30:40.885214 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-10 14:30:40.885221 | orchestrator | Saturday 10 January 2026 14:30:40 +0000 (0:00:01.415) 0:00:22.390 ****** 2026-01-10 14:30:40.885227 | 
orchestrator | =============================================================================== 2026-01-10 14:30:40.885234 | orchestrator | osism.services.frr : Install frr package ------------------------------- 10.80s 2026-01-10 14:30:40.885240 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 2.29s 2026-01-10 14:30:40.885246 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.64s 2026-01-10 14:30:40.885253 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.42s 2026-01-10 14:30:40.885259 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.24s 2026-01-10 14:30:40.885281 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.19s 2026-01-10 14:30:40.885287 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 1.03s 2026-01-10 14:30:40.885294 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.02s 2026-01-10 14:30:40.885300 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 0.99s 2026-01-10 14:30:40.885323 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.24s 2026-01-10 14:30:40.885327 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 0.17s 2026-01-10 14:30:40.885332 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.15s 2026-01-10 14:30:41.198990 | orchestrator | 2026-01-10 14:30:41.201483 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Sat Jan 10 14:30:41 UTC 2026 2026-01-10 14:30:41.201535 | orchestrator | 2026-01-10 14:30:43.189852 | orchestrator | 2026-01-10 14:30:43 | INFO  | Collection nutshell is prepared for execution 2026-01-10 14:30:43.189951 | orchestrator | 2026-01-10 14:30:43 | INFO  | A [0] - 
dotfiles 2026-01-10 14:30:53.203224 | orchestrator | 2026-01-10 14:30:53 | INFO  | A [0] - homer 2026-01-10 14:30:53.203298 | orchestrator | 2026-01-10 14:30:53 | INFO  | A [0] - netdata 2026-01-10 14:30:53.203304 | orchestrator | 2026-01-10 14:30:53 | INFO  | A [0] - openstackclient 2026-01-10 14:30:53.203309 | orchestrator | 2026-01-10 14:30:53 | INFO  | A [0] - phpmyadmin 2026-01-10 14:30:53.203313 | orchestrator | 2026-01-10 14:30:53 | INFO  | A [0] - common 2026-01-10 14:30:53.207249 | orchestrator | 2026-01-10 14:30:53 | INFO  | A [1] -- loadbalancer 2026-01-10 14:30:53.207317 | orchestrator | 2026-01-10 14:30:53 | INFO  | A [2] --- opensearch 2026-01-10 14:30:53.207326 | orchestrator | 2026-01-10 14:30:53 | INFO  | A [2] --- mariadb-ng 2026-01-10 14:30:53.207757 | orchestrator | 2026-01-10 14:30:53 | INFO  | A [3] ---- horizon 2026-01-10 14:30:53.207943 | orchestrator | 2026-01-10 14:30:53 | INFO  | A [3] ---- keystone 2026-01-10 14:30:53.207955 | orchestrator | 2026-01-10 14:30:53 | INFO  | A [4] ----- neutron 2026-01-10 14:30:53.208230 | orchestrator | 2026-01-10 14:30:53 | INFO  | A [5] ------ wait-for-nova 2026-01-10 14:30:53.208557 | orchestrator | 2026-01-10 14:30:53 | INFO  | A [6] ------- octavia 2026-01-10 14:30:53.210277 | orchestrator | 2026-01-10 14:30:53 | INFO  | A [4] ----- barbican 2026-01-10 14:30:53.210531 | orchestrator | 2026-01-10 14:30:53 | INFO  | A [4] ----- designate 2026-01-10 14:30:53.210552 | orchestrator | 2026-01-10 14:30:53 | INFO  | A [4] ----- ironic 2026-01-10 14:30:53.210911 | orchestrator | 2026-01-10 14:30:53 | INFO  | A [4] ----- placement 2026-01-10 14:30:53.210929 | orchestrator | 2026-01-10 14:30:53 | INFO  | A [4] ----- magnum 2026-01-10 14:30:53.211880 | orchestrator | 2026-01-10 14:30:53 | INFO  | A [1] -- openvswitch 2026-01-10 14:30:53.211897 | orchestrator | 2026-01-10 14:30:53 | INFO  | A [2] --- ovn 2026-01-10 14:30:53.212087 | orchestrator | 2026-01-10 14:30:53 | INFO  | A [1] -- memcached 2026-01-10 
14:30:53.212367 | orchestrator | 2026-01-10 14:30:53 | INFO  | A [1] -- redis 2026-01-10 14:30:53.212380 | orchestrator | 2026-01-10 14:30:53 | INFO  | A [1] -- rabbitmq-ng 2026-01-10 14:30:53.212779 | orchestrator | 2026-01-10 14:30:53 | INFO  | A [0] - kubernetes 2026-01-10 14:30:53.215536 | orchestrator | 2026-01-10 14:30:53 | INFO  | A [1] -- kubeconfig 2026-01-10 14:30:53.215564 | orchestrator | 2026-01-10 14:30:53 | INFO  | A [1] -- copy-kubeconfig 2026-01-10 14:30:53.216104 | orchestrator | 2026-01-10 14:30:53 | INFO  | A [0] - ceph 2026-01-10 14:30:53.218425 | orchestrator | 2026-01-10 14:30:53 | INFO  | A [1] -- ceph-pools 2026-01-10 14:30:53.218452 | orchestrator | 2026-01-10 14:30:53 | INFO  | A [2] --- copy-ceph-keys 2026-01-10 14:30:53.218688 | orchestrator | 2026-01-10 14:30:53 | INFO  | A [3] ---- cephclient 2026-01-10 14:30:53.218702 | orchestrator | 2026-01-10 14:30:53 | INFO  | A [4] ----- ceph-bootstrap-dashboard 2026-01-10 14:30:53.218706 | orchestrator | 2026-01-10 14:30:53 | INFO  | A [4] ----- wait-for-keystone 2026-01-10 14:30:53.218829 | orchestrator | 2026-01-10 14:30:53 | INFO  | A [5] ------ kolla-ceph-rgw 2026-01-10 14:30:53.218837 | orchestrator | 2026-01-10 14:30:53 | INFO  | A [5] ------ glance 2026-01-10 14:30:53.219100 | orchestrator | 2026-01-10 14:30:53 | INFO  | A [5] ------ cinder 2026-01-10 14:30:53.219141 | orchestrator | 2026-01-10 14:30:53 | INFO  | A [5] ------ nova 2026-01-10 14:30:53.219506 | orchestrator | 2026-01-10 14:30:53 | INFO  | A [4] ----- prometheus 2026-01-10 14:30:53.219521 | orchestrator | 2026-01-10 14:30:53 | INFO  | A [5] ------ grafana 2026-01-10 14:30:53.439523 | orchestrator | 2026-01-10 14:30:53 | INFO  | All tasks of the collection nutshell are prepared for execution 2026-01-10 14:30:53.439652 | orchestrator | 2026-01-10 14:30:53 | INFO  | Tasks are running in the background 2026-01-10 14:30:56.533516 | orchestrator | 2026-01-10 14:30:56 | INFO  | No task IDs specified, wait for all currently running 
tasks 2026-01-10 14:30:58.662511 | orchestrator | 2026-01-10 14:30:58 | INFO  | Task cab52d95-5f2d-41b9-a0d5-5be6d3e013fb is in state STARTED 2026-01-10 14:30:58.668507 | orchestrator | 2026-01-10 14:30:58 | INFO  | Task b8fb4d41-ff0c-46b3-b679-202a4773d308 is in state STARTED 2026-01-10 14:30:58.671081 | orchestrator | 2026-01-10 14:30:58 | INFO  | Task b5876ff7-8807-4469-8c8a-cbc58203025e is in state STARTED 2026-01-10 14:30:58.671147 | orchestrator | 2026-01-10 14:30:58 | INFO  | Task 8625e702-d571-47cc-b54a-373b2146b0ac is in state STARTED 2026-01-10 14:30:58.671161 | orchestrator | 2026-01-10 14:30:58 | INFO  | Task 77b8ae82-f148-4bc9-b927-bd7e170387da is in state STARTED 2026-01-10 14:30:58.671299 | orchestrator | 2026-01-10 14:30:58 | INFO  | Task 497629a9-58db-437d-96c8-eeaba0322bae is in state STARTED 2026-01-10 14:30:58.674252 | orchestrator | 2026-01-10 14:30:58 | INFO  | Task 17d93e9b-366a-450f-8ecc-93dc96c1cffb is in state STARTED 2026-01-10 14:30:58.674318 | orchestrator | 2026-01-10 14:30:58 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:31:01.718482 | orchestrator | 2026-01-10 14:31:01 | INFO  | Task cab52d95-5f2d-41b9-a0d5-5be6d3e013fb is in state STARTED 2026-01-10 14:31:01.720071 | orchestrator | 2026-01-10 14:31:01 | INFO  | Task b8fb4d41-ff0c-46b3-b679-202a4773d308 is in state STARTED 2026-01-10 14:31:01.721503 | orchestrator | 2026-01-10 14:31:01 | INFO  | Task b5876ff7-8807-4469-8c8a-cbc58203025e is in state STARTED 2026-01-10 14:31:01.722915 | orchestrator | 2026-01-10 14:31:01 | INFO  | Task 8625e702-d571-47cc-b54a-373b2146b0ac is in state STARTED 2026-01-10 14:31:01.722969 | orchestrator | 2026-01-10 14:31:01 | INFO  | Task 77b8ae82-f148-4bc9-b927-bd7e170387da is in state STARTED 2026-01-10 14:31:01.723871 | orchestrator | 2026-01-10 14:31:01 | INFO  | Task 497629a9-58db-437d-96c8-eeaba0322bae is in state STARTED 2026-01-10 14:31:01.725957 | orchestrator | 2026-01-10 14:31:01 | INFO  | Task 
17d93e9b-366a-450f-8ecc-93dc96c1cffb is in state STARTED 2026-01-10 14:31:01.726006 | orchestrator | 2026-01-10 14:31:01 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:31:04.756104 | orchestrator | 2026-01-10 14:31:04 | INFO  | Task cab52d95-5f2d-41b9-a0d5-5be6d3e013fb is in state STARTED 2026-01-10 14:31:04.756177 | orchestrator | 2026-01-10 14:31:04 | INFO  | Task b8fb4d41-ff0c-46b3-b679-202a4773d308 is in state STARTED 2026-01-10 14:31:04.756676 | orchestrator | 2026-01-10 14:31:04 | INFO  | Task b5876ff7-8807-4469-8c8a-cbc58203025e is in state STARTED 2026-01-10 14:31:04.757256 | orchestrator | 2026-01-10 14:31:04 | INFO  | Task 8625e702-d571-47cc-b54a-373b2146b0ac is in state STARTED 2026-01-10 14:31:04.758960 | orchestrator | 2026-01-10 14:31:04 | INFO  | Task 77b8ae82-f148-4bc9-b927-bd7e170387da is in state STARTED 2026-01-10 14:31:04.759534 | orchestrator | 2026-01-10 14:31:04 | INFO  | Task 497629a9-58db-437d-96c8-eeaba0322bae is in state STARTED 2026-01-10 14:31:04.761067 | orchestrator | 2026-01-10 14:31:04 | INFO  | Task 17d93e9b-366a-450f-8ecc-93dc96c1cffb is in state STARTED 2026-01-10 14:31:04.761129 | orchestrator | 2026-01-10 14:31:04 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:31:07.957683 | orchestrator | 2026-01-10 14:31:07 | INFO  | Task cab52d95-5f2d-41b9-a0d5-5be6d3e013fb is in state STARTED 2026-01-10 14:31:07.960662 | orchestrator | 2026-01-10 14:31:07 | INFO  | Task b8fb4d41-ff0c-46b3-b679-202a4773d308 is in state STARTED 2026-01-10 14:31:07.966928 | orchestrator | 2026-01-10 14:31:07 | INFO  | Task b5876ff7-8807-4469-8c8a-cbc58203025e is in state STARTED 2026-01-10 14:31:07.969323 | orchestrator | 2026-01-10 14:31:07 | INFO  | Task 8625e702-d571-47cc-b54a-373b2146b0ac is in state STARTED 2026-01-10 14:31:07.972389 | orchestrator | 2026-01-10 14:31:07 | INFO  | Task 77b8ae82-f148-4bc9-b927-bd7e170387da is in state STARTED 2026-01-10 14:31:07.972835 | orchestrator | 2026-01-10 14:31:07 | INFO  | Task 
497629a9-58db-437d-96c8-eeaba0322bae is in state STARTED 2026-01-10 14:31:07.976252 | orchestrator | 2026-01-10 14:31:07 | INFO  | Task 17d93e9b-366a-450f-8ecc-93dc96c1cffb is in state STARTED 2026-01-10 14:31:07.976302 | orchestrator | 2026-01-10 14:31:07 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:31:11.183943 | orchestrator | 2026-01-10 14:31:11 | INFO  | Task cab52d95-5f2d-41b9-a0d5-5be6d3e013fb is in state STARTED 2026-01-10 14:31:11.183993 | orchestrator | 2026-01-10 14:31:11 | INFO  | Task b8fb4d41-ff0c-46b3-b679-202a4773d308 is in state STARTED 2026-01-10 14:31:11.183999 | orchestrator | 2026-01-10 14:31:11 | INFO  | Task b5876ff7-8807-4469-8c8a-cbc58203025e is in state STARTED 2026-01-10 14:31:11.184003 | orchestrator | 2026-01-10 14:31:11 | INFO  | Task 8625e702-d571-47cc-b54a-373b2146b0ac is in state STARTED 2026-01-10 14:31:11.184006 | orchestrator | 2026-01-10 14:31:11 | INFO  | Task 77b8ae82-f148-4bc9-b927-bd7e170387da is in state STARTED 2026-01-10 14:31:11.184010 | orchestrator | 2026-01-10 14:31:11 | INFO  | Task 497629a9-58db-437d-96c8-eeaba0322bae is in state STARTED 2026-01-10 14:31:11.184013 | orchestrator | 2026-01-10 14:31:11 | INFO  | Task 17d93e9b-366a-450f-8ecc-93dc96c1cffb is in state STARTED 2026-01-10 14:31:11.184016 | orchestrator | 2026-01-10 14:31:11 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:31:14.266302 | orchestrator | 2026-01-10 14:31:14 | INFO  | Task cab52d95-5f2d-41b9-a0d5-5be6d3e013fb is in state STARTED 2026-01-10 14:31:14.266370 | orchestrator | 2026-01-10 14:31:14 | INFO  | Task b8fb4d41-ff0c-46b3-b679-202a4773d308 is in state STARTED 2026-01-10 14:31:14.266380 | orchestrator | 2026-01-10 14:31:14 | INFO  | Task b5876ff7-8807-4469-8c8a-cbc58203025e is in state STARTED 2026-01-10 14:31:14.266387 | orchestrator | 2026-01-10 14:31:14 | INFO  | Task 8625e702-d571-47cc-b54a-373b2146b0ac is in state STARTED 2026-01-10 14:31:14.266394 | orchestrator | 2026-01-10 14:31:14 | INFO  | Task 
77b8ae82-f148-4bc9-b927-bd7e170387da is in state STARTED
2026-01-10 14:31:14.266401 | orchestrator | 2026-01-10 14:31:14 | INFO  | Task 497629a9-58db-437d-96c8-eeaba0322bae is in state STARTED
2026-01-10 14:31:14.266407 | orchestrator | 2026-01-10 14:31:14 | INFO  | Task 17d93e9b-366a-450f-8ecc-93dc96c1cffb is in state STARTED
2026-01-10 14:31:14.266414 | orchestrator | 2026-01-10 14:31:14 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:31:17.189180 | orchestrator | 2026-01-10 14:31:17 | INFO  | Task cab52d95-5f2d-41b9-a0d5-5be6d3e013fb is in state STARTED
2026-01-10 14:31:17.194478 | orchestrator | 2026-01-10 14:31:17 | INFO  | Task b8fb4d41-ff0c-46b3-b679-202a4773d308 is in state STARTED
2026-01-10 14:31:17.203251 | orchestrator | 2026-01-10 14:31:17 | INFO  | Task b5876ff7-8807-4469-8c8a-cbc58203025e is in state STARTED
2026-01-10 14:31:17.210966 | orchestrator | 2026-01-10 14:31:17 | INFO  | Task 8625e702-d571-47cc-b54a-373b2146b0ac is in state STARTED
2026-01-10 14:31:17.217339 | orchestrator | 2026-01-10 14:31:17 | INFO  | Task 77b8ae82-f148-4bc9-b927-bd7e170387da is in state STARTED
2026-01-10 14:31:17.222136 | orchestrator | 2026-01-10 14:31:17 | INFO  | Task 497629a9-58db-437d-96c8-eeaba0322bae is in state STARTED
2026-01-10 14:31:17.228683 | orchestrator | 2026-01-10 14:31:17 | INFO  | Task 17d93e9b-366a-450f-8ecc-93dc96c1cffb is in state STARTED
2026-01-10 14:31:17.228734 | orchestrator | 2026-01-10 14:31:17 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:31:20.393314 | orchestrator | 2026-01-10 14:31:20 | INFO  | Task cab52d95-5f2d-41b9-a0d5-5be6d3e013fb is in state STARTED
2026-01-10 14:31:20.393888 | orchestrator |
2026-01-10 14:31:20.393935 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] *****************************************
2026-01-10 14:31:20.393945 | orchestrator |
2026-01-10 14:31:20.393953 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] ****
2026-01-10 14:31:20.393958 | orchestrator | Saturday 10 January 2026 14:31:06 +0000 (0:00:01.066) 0:00:01.066 ******
2026-01-10 14:31:20.393962 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:31:20.393966 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:31:20.393970 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:31:20.393974 | orchestrator | changed: [testbed-manager]
2026-01-10 14:31:20.393978 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:31:20.393981 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:31:20.393985 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:31:20.393989 | orchestrator |
2026-01-10 14:31:20.393993 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ********
2026-01-10 14:31:20.393996 | orchestrator | Saturday 10 January 2026 14:31:10 +0000 (0:00:03.421) 0:00:04.488 ******
2026-01-10 14:31:20.394000 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2026-01-10 14:31:20.394004 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2026-01-10 14:31:20.394008 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2026-01-10 14:31:20.394040 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2026-01-10 14:31:20.394044 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2026-01-10 14:31:20.394048 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2026-01-10 14:31:20.394052 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2026-01-10 14:31:20.394056 | orchestrator |
2026-01-10 14:31:20.394060 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] ***
2026-01-10 14:31:20.394064 | orchestrator | Saturday 10 January 2026 14:31:12 +0000 (0:00:02.694) 0:00:07.183 ******
2026-01-10 14:31:20.394070 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-01-10 14:31:11.097509', 'end': '2026-01-10 14:31:11.105115', 'delta': '0:00:00.007606', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-01-10 14:31:20.394082 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-01-10 14:31:11.099602', 'end': '2026-01-10 14:31:11.103685', 'delta': '0:00:00.004083', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-01-10 14:31:20.394100 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-01-10 14:31:11.323101', 'end': '2026-01-10 14:31:11.327672', 'delta': '0:00:00.004571', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-01-10 14:31:20.394116 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-01-10 14:31:12.075136', 'end': '2026-01-10 14:31:12.079160', 'delta': '0:00:00.004024', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-01-10 14:31:20.394121 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-01-10 14:31:12.542565', 'end': '2026-01-10 14:31:12.548513', 'delta': '0:00:00.005948', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-01-10 14:31:20.394125 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-01-10 14:31:12.712181', 'end': '2026-01-10 14:31:12.716962', 'delta': '0:00:00.004781', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-01-10 14:31:20.394131 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-01-10 14:31:12.768698', 'end': '2026-01-10 14:31:12.773851', 'delta': '0:00:00.005153', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-01-10 14:31:20.394141 | orchestrator |
2026-01-10 14:31:20.394145 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] ****
2026-01-10 14:31:20.394149 | orchestrator | Saturday 10 January 2026 14:31:15 +0000 (0:00:02.152) 0:00:09.335 ******
2026-01-10 14:31:20.394153 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2026-01-10 14:31:20.394157 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2026-01-10 14:31:20.394160 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2026-01-10 14:31:20.394164 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2026-01-10 14:31:20.394168 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2026-01-10 14:31:20.394172 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2026-01-10 14:31:20.394175 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2026-01-10 14:31:20.394179 | orchestrator |
2026-01-10 14:31:20.394183 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ******************
2026-01-10 14:31:20.394187 | orchestrator | Saturday 10 January 2026 14:31:16 +0000 (0:00:01.742) 0:00:11.078 ******
2026-01-10 14:31:20.394191 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf)
2026-01-10 14:31:20.394195 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf)
2026-01-10 14:31:20.394198 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf)
2026-01-10 14:31:20.394202 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf)
2026-01-10 14:31:20.394206 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf)
2026-01-10 14:31:20.394210 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf)
2026-01-10 14:31:20.394213 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf)
2026-01-10 14:31:20.394217 | orchestrator |
2026-01-10 14:31:20.394221 | orchestrator | PLAY RECAP *********************************************************************
2026-01-10 14:31:20.394228 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:31:20.394233 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:31:20.394237 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:31:20.394241 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:31:20.394245 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:31:20.394249 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:31:20.394253 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:31:20.394257 | orchestrator |
2026-01-10 14:31:20.394261 | orchestrator |
2026-01-10 14:31:20.394264 | orchestrator | TASKS RECAP ********************************************************************
2026-01-10 14:31:20.394269 | orchestrator | Saturday 10 January 2026 14:31:18 +0000 (0:00:02.033) 0:00:13.111 ******
2026-01-10 14:31:20.394275 | orchestrator | ===============================================================================
2026-01-10 14:31:20.394284 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 3.42s
2026-01-10 14:31:20.394288 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 2.69s
2026-01-10 14:31:20.394292 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 2.15s
2026-01-10 14:31:20.394296 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 2.03s
2026-01-10 14:31:20.394299 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 1.74s
2026-01-10 14:31:20.394303 | orchestrator | 2026-01-10 14:31:20 | INFO  | Task b8fb4d41-ff0c-46b3-b679-202a4773d308 is in state SUCCESS
2026-01-10 14:31:20.400273 | orchestrator | 2026-01-10 14:31:20 | INFO  | Task b5876ff7-8807-4469-8c8a-cbc58203025e is in state STARTED
2026-01-10 14:31:20.406298 | orchestrator | 2026-01-10 14:31:20 | INFO  | Task 88e355cf-1492-4257-ac15-292089db7a24 is in state STARTED
2026-01-10 14:31:20.409639 | orchestrator | 2026-01-10 14:31:20 | INFO  | Task 8625e702-d571-47cc-b54a-373b2146b0ac is in state STARTED
2026-01-10 14:31:20.420793 | orchestrator | 2026-01-10 14:31:20 | INFO  | Task 77b8ae82-f148-4bc9-b927-bd7e170387da is in state STARTED
2026-01-10 14:31:20.425601 | orchestrator | 2026-01-10 14:31:20 | INFO  | Task 497629a9-58db-437d-96c8-eeaba0322bae is in state STARTED
2026-01-10 14:31:20.430840 | orchestrator | 2026-01-10 14:31:20 | INFO  | Task 17d93e9b-366a-450f-8ecc-93dc96c1cffb is in state STARTED
2026-01-10 14:31:20.430883 | orchestrator | 2026-01-10 14:31:20 | INFO  | Wait 1 second(s)
until the next check 2026-01-10 14:31:23.508740 | orchestrator | 2026-01-10 14:31:23 | INFO  | Task cab52d95-5f2d-41b9-a0d5-5be6d3e013fb is in state STARTED 2026-01-10 14:31:23.509307 | orchestrator | 2026-01-10 14:31:23 | INFO  | Task b5876ff7-8807-4469-8c8a-cbc58203025e is in state STARTED 2026-01-10 14:31:23.512210 | orchestrator | 2026-01-10 14:31:23 | INFO  | Task 88e355cf-1492-4257-ac15-292089db7a24 is in state STARTED 2026-01-10 14:31:23.513255 | orchestrator | 2026-01-10 14:31:23 | INFO  | Task 8625e702-d571-47cc-b54a-373b2146b0ac is in state STARTED 2026-01-10 14:31:23.515254 | orchestrator | 2026-01-10 14:31:23 | INFO  | Task 77b8ae82-f148-4bc9-b927-bd7e170387da is in state STARTED 2026-01-10 14:31:23.517418 | orchestrator | 2026-01-10 14:31:23 | INFO  | Task 497629a9-58db-437d-96c8-eeaba0322bae is in state STARTED 2026-01-10 14:31:23.521455 | orchestrator | 2026-01-10 14:31:23 | INFO  | Task 17d93e9b-366a-450f-8ecc-93dc96c1cffb is in state STARTED 2026-01-10 14:31:23.521502 | orchestrator | 2026-01-10 14:31:23 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:31:26.579616 | orchestrator | 2026-01-10 14:31:26 | INFO  | Task cab52d95-5f2d-41b9-a0d5-5be6d3e013fb is in state STARTED 2026-01-10 14:31:26.579688 | orchestrator | 2026-01-10 14:31:26 | INFO  | Task b5876ff7-8807-4469-8c8a-cbc58203025e is in state STARTED 2026-01-10 14:31:26.579696 | orchestrator | 2026-01-10 14:31:26 | INFO  | Task 88e355cf-1492-4257-ac15-292089db7a24 is in state STARTED 2026-01-10 14:31:26.579701 | orchestrator | 2026-01-10 14:31:26 | INFO  | Task 8625e702-d571-47cc-b54a-373b2146b0ac is in state STARTED 2026-01-10 14:31:26.579706 | orchestrator | 2026-01-10 14:31:26 | INFO  | Task 77b8ae82-f148-4bc9-b927-bd7e170387da is in state STARTED 2026-01-10 14:31:26.579711 | orchestrator | 2026-01-10 14:31:26 | INFO  | Task 497629a9-58db-437d-96c8-eeaba0322bae is in state STARTED 2026-01-10 14:31:26.579716 | orchestrator | 2026-01-10 14:31:26 | INFO  | Task 
17d93e9b-366a-450f-8ecc-93dc96c1cffb is in state STARTED 2026-01-10 14:31:26.579721 | orchestrator | 2026-01-10 14:31:26 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:31:29.684723 | orchestrator | 2026-01-10 14:31:29 | INFO  | Task cab52d95-5f2d-41b9-a0d5-5be6d3e013fb is in state STARTED 2026-01-10 14:31:29.684784 | orchestrator | 2026-01-10 14:31:29 | INFO  | Task b5876ff7-8807-4469-8c8a-cbc58203025e is in state STARTED 2026-01-10 14:31:29.684793 | orchestrator | 2026-01-10 14:31:29 | INFO  | Task 88e355cf-1492-4257-ac15-292089db7a24 is in state STARTED 2026-01-10 14:31:29.687326 | orchestrator | 2026-01-10 14:31:29 | INFO  | Task 8625e702-d571-47cc-b54a-373b2146b0ac is in state STARTED 2026-01-10 14:31:29.687773 | orchestrator | 2026-01-10 14:31:29 | INFO  | Task 77b8ae82-f148-4bc9-b927-bd7e170387da is in state STARTED 2026-01-10 14:31:29.688383 | orchestrator | 2026-01-10 14:31:29 | INFO  | Task 497629a9-58db-437d-96c8-eeaba0322bae is in state STARTED 2026-01-10 14:31:29.688921 | orchestrator | 2026-01-10 14:31:29 | INFO  | Task 17d93e9b-366a-450f-8ecc-93dc96c1cffb is in state STARTED 2026-01-10 14:31:29.688949 | orchestrator | 2026-01-10 14:31:29 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:31:32.773401 | orchestrator | 2026-01-10 14:31:32 | INFO  | Task cab52d95-5f2d-41b9-a0d5-5be6d3e013fb is in state STARTED 2026-01-10 14:31:32.775183 | orchestrator | 2026-01-10 14:31:32 | INFO  | Task b5876ff7-8807-4469-8c8a-cbc58203025e is in state STARTED 2026-01-10 14:31:32.777333 | orchestrator | 2026-01-10 14:31:32 | INFO  | Task 88e355cf-1492-4257-ac15-292089db7a24 is in state STARTED 2026-01-10 14:31:32.785227 | orchestrator | 2026-01-10 14:31:32 | INFO  | Task 8625e702-d571-47cc-b54a-373b2146b0ac is in state STARTED 2026-01-10 14:31:32.785295 | orchestrator | 2026-01-10 14:31:32 | INFO  | Task 77b8ae82-f148-4bc9-b927-bd7e170387da is in state STARTED 2026-01-10 14:31:32.785301 | orchestrator | 2026-01-10 14:31:32 | INFO  | Task 
497629a9-58db-437d-96c8-eeaba0322bae is in state STARTED 2026-01-10 14:31:32.785305 | orchestrator | 2026-01-10 14:31:32 | INFO  | Task 17d93e9b-366a-450f-8ecc-93dc96c1cffb is in state STARTED 2026-01-10 14:31:32.785310 | orchestrator | 2026-01-10 14:31:32 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:31:35.837939 | orchestrator | 2026-01-10 14:31:35 | INFO  | Task cab52d95-5f2d-41b9-a0d5-5be6d3e013fb is in state STARTED 2026-01-10 14:31:35.839091 | orchestrator | 2026-01-10 14:31:35 | INFO  | Task b5876ff7-8807-4469-8c8a-cbc58203025e is in state STARTED 2026-01-10 14:31:35.840478 | orchestrator | 2026-01-10 14:31:35 | INFO  | Task 88e355cf-1492-4257-ac15-292089db7a24 is in state STARTED 2026-01-10 14:31:35.840883 | orchestrator | 2026-01-10 14:31:35 | INFO  | Task 8625e702-d571-47cc-b54a-373b2146b0ac is in state STARTED 2026-01-10 14:31:35.841793 | orchestrator | 2026-01-10 14:31:35 | INFO  | Task 77b8ae82-f148-4bc9-b927-bd7e170387da is in state STARTED 2026-01-10 14:31:35.842314 | orchestrator | 2026-01-10 14:31:35 | INFO  | Task 497629a9-58db-437d-96c8-eeaba0322bae is in state STARTED 2026-01-10 14:31:35.844003 | orchestrator | 2026-01-10 14:31:35 | INFO  | Task 17d93e9b-366a-450f-8ecc-93dc96c1cffb is in state STARTED 2026-01-10 14:31:35.844043 | orchestrator | 2026-01-10 14:31:35 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:31:38.976648 | orchestrator | 2026-01-10 14:31:38 | INFO  | Task cab52d95-5f2d-41b9-a0d5-5be6d3e013fb is in state STARTED 2026-01-10 14:31:38.976726 | orchestrator | 2026-01-10 14:31:38 | INFO  | Task b5876ff7-8807-4469-8c8a-cbc58203025e is in state STARTED 2026-01-10 14:31:38.976733 | orchestrator | 2026-01-10 14:31:38 | INFO  | Task 88e355cf-1492-4257-ac15-292089db7a24 is in state STARTED 2026-01-10 14:31:38.976737 | orchestrator | 2026-01-10 14:31:38 | INFO  | Task 8625e702-d571-47cc-b54a-373b2146b0ac is in state STARTED 2026-01-10 14:31:38.976759 | orchestrator | 2026-01-10 14:31:38 | INFO  | Task 
77b8ae82-f148-4bc9-b927-bd7e170387da is in state STARTED 2026-01-10 14:31:38.976936 | orchestrator | 2026-01-10 14:31:38 | INFO  | Task 497629a9-58db-437d-96c8-eeaba0322bae is in state STARTED 2026-01-10 14:31:38.976946 | orchestrator | 2026-01-10 14:31:38 | INFO  | Task 17d93e9b-366a-450f-8ecc-93dc96c1cffb is in state STARTED 2026-01-10 14:31:38.976950 | orchestrator | 2026-01-10 14:31:38 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:31:42.013042 | orchestrator | 2026-01-10 14:31:41 | INFO  | Task cab52d95-5f2d-41b9-a0d5-5be6d3e013fb is in state STARTED 2026-01-10 14:31:42.013106 | orchestrator | 2026-01-10 14:31:41 | INFO  | Task b5876ff7-8807-4469-8c8a-cbc58203025e is in state STARTED 2026-01-10 14:31:42.013113 | orchestrator | 2026-01-10 14:31:41 | INFO  | Task 88e355cf-1492-4257-ac15-292089db7a24 is in state STARTED 2026-01-10 14:31:42.013119 | orchestrator | 2026-01-10 14:31:42 | INFO  | Task 8625e702-d571-47cc-b54a-373b2146b0ac is in state STARTED 2026-01-10 14:31:42.013124 | orchestrator | 2026-01-10 14:31:42 | INFO  | Task 77b8ae82-f148-4bc9-b927-bd7e170387da is in state STARTED 2026-01-10 14:31:42.013128 | orchestrator | 2026-01-10 14:31:42 | INFO  | Task 497629a9-58db-437d-96c8-eeaba0322bae is in state STARTED 2026-01-10 14:31:42.013144 | orchestrator | 2026-01-10 14:31:42 | INFO  | Task 17d93e9b-366a-450f-8ecc-93dc96c1cffb is in state STARTED 2026-01-10 14:31:42.013153 | orchestrator | 2026-01-10 14:31:42 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:31:45.049111 | orchestrator | 2026-01-10 14:31:45 | INFO  | Task cab52d95-5f2d-41b9-a0d5-5be6d3e013fb is in state STARTED 2026-01-10 14:31:45.049201 | orchestrator | 2026-01-10 14:31:45 | INFO  | Task b5876ff7-8807-4469-8c8a-cbc58203025e is in state STARTED 2026-01-10 14:31:45.049591 | orchestrator | 2026-01-10 14:31:45 | INFO  | Task 88e355cf-1492-4257-ac15-292089db7a24 is in state STARTED 2026-01-10 14:31:45.050240 | orchestrator | 2026-01-10 14:31:45 | INFO  | Task 
8625e702-d571-47cc-b54a-373b2146b0ac is in state SUCCESS 2026-01-10 14:31:45.051071 | orchestrator | 2026-01-10 14:31:45 | INFO  | Task 77b8ae82-f148-4bc9-b927-bd7e170387da is in state STARTED 2026-01-10 14:31:45.051908 | orchestrator | 2026-01-10 14:31:45 | INFO  | Task 497629a9-58db-437d-96c8-eeaba0322bae is in state STARTED 2026-01-10 14:31:45.052658 | orchestrator | 2026-01-10 14:31:45 | INFO  | Task 17d93e9b-366a-450f-8ecc-93dc96c1cffb is in state STARTED 2026-01-10 14:31:45.052721 | orchestrator | 2026-01-10 14:31:45 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:31:48.135834 | orchestrator | 2026-01-10 14:31:48 | INFO  | Task cab52d95-5f2d-41b9-a0d5-5be6d3e013fb is in state STARTED 2026-01-10 14:31:48.135890 | orchestrator | 2026-01-10 14:31:48 | INFO  | Task b5876ff7-8807-4469-8c8a-cbc58203025e is in state STARTED 2026-01-10 14:31:48.135895 | orchestrator | 2026-01-10 14:31:48 | INFO  | Task 88e355cf-1492-4257-ac15-292089db7a24 is in state STARTED 2026-01-10 14:31:48.135899 | orchestrator | 2026-01-10 14:31:48 | INFO  | Task 77b8ae82-f148-4bc9-b927-bd7e170387da is in state STARTED 2026-01-10 14:31:48.135903 | orchestrator | 2026-01-10 14:31:48 | INFO  | Task 497629a9-58db-437d-96c8-eeaba0322bae is in state STARTED 2026-01-10 14:31:48.135907 | orchestrator | 2026-01-10 14:31:48 | INFO  | Task 17d93e9b-366a-450f-8ecc-93dc96c1cffb is in state STARTED 2026-01-10 14:31:48.135911 | orchestrator | 2026-01-10 14:31:48 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:31:51.280972 | orchestrator | 2026-01-10 14:31:51 | INFO  | Task cab52d95-5f2d-41b9-a0d5-5be6d3e013fb is in state STARTED 2026-01-10 14:31:51.281028 | orchestrator | 2026-01-10 14:31:51 | INFO  | Task b5876ff7-8807-4469-8c8a-cbc58203025e is in state STARTED 2026-01-10 14:31:51.281055 | orchestrator | 2026-01-10 14:31:51 | INFO  | Task 88e355cf-1492-4257-ac15-292089db7a24 is in state STARTED 2026-01-10 14:31:51.281063 | orchestrator | 2026-01-10 14:31:51 | INFO  | Task 
77b8ae82-f148-4bc9-b927-bd7e170387da is in state STARTED 2026-01-10 14:31:51.281069 | orchestrator | 2026-01-10 14:31:51 | INFO  | Task 497629a9-58db-437d-96c8-eeaba0322bae is in state STARTED 2026-01-10 14:31:51.281075 | orchestrator | 2026-01-10 14:31:51 | INFO  | Task 17d93e9b-366a-450f-8ecc-93dc96c1cffb is in state STARTED 2026-01-10 14:31:51.281082 | orchestrator | 2026-01-10 14:31:51 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:31:54.264076 | orchestrator | 2026-01-10 14:31:54 | INFO  | Task cab52d95-5f2d-41b9-a0d5-5be6d3e013fb is in state SUCCESS 2026-01-10 14:31:54.264171 | orchestrator | 2026-01-10 14:31:54 | INFO  | Task b5876ff7-8807-4469-8c8a-cbc58203025e is in state STARTED 2026-01-10 14:31:54.264183 | orchestrator | 2026-01-10 14:31:54 | INFO  | Task 88e355cf-1492-4257-ac15-292089db7a24 is in state STARTED 2026-01-10 14:31:54.264191 | orchestrator | 2026-01-10 14:31:54 | INFO  | Task 77b8ae82-f148-4bc9-b927-bd7e170387da is in state STARTED 2026-01-10 14:31:54.264199 | orchestrator | 2026-01-10 14:31:54 | INFO  | Task 497629a9-58db-437d-96c8-eeaba0322bae is in state STARTED 2026-01-10 14:31:54.264205 | orchestrator | 2026-01-10 14:31:54 | INFO  | Task 17d93e9b-366a-450f-8ecc-93dc96c1cffb is in state STARTED 2026-01-10 14:31:54.264213 | orchestrator | 2026-01-10 14:31:54 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:31:57.258249 | orchestrator | 2026-01-10 14:31:57 | INFO  | Task b5876ff7-8807-4469-8c8a-cbc58203025e is in state STARTED 2026-01-10 14:31:57.259174 | orchestrator | 2026-01-10 14:31:57 | INFO  | Task 88e355cf-1492-4257-ac15-292089db7a24 is in state STARTED 2026-01-10 14:31:57.260865 | orchestrator | 2026-01-10 14:31:57 | INFO  | Task 77b8ae82-f148-4bc9-b927-bd7e170387da is in state STARTED 2026-01-10 14:31:57.261820 | orchestrator | 2026-01-10 14:31:57 | INFO  | Task 497629a9-58db-437d-96c8-eeaba0322bae is in state STARTED 2026-01-10 14:31:57.262949 | orchestrator | 2026-01-10 14:31:57 | INFO  | Task 
17d93e9b-366a-450f-8ecc-93dc96c1cffb is in state STARTED 2026-01-10 14:31:57.263205 | orchestrator | 2026-01-10 14:31:57 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:32:00.307622 | orchestrator | 2026-01-10 14:32:00 | INFO  | Task b5876ff7-8807-4469-8c8a-cbc58203025e is in state STARTED 2026-01-10 14:32:00.307973 | orchestrator | 2026-01-10 14:32:00 | INFO  | Task 88e355cf-1492-4257-ac15-292089db7a24 is in state STARTED 2026-01-10 14:32:00.310477 | orchestrator | 2026-01-10 14:32:00 | INFO  | Task 77b8ae82-f148-4bc9-b927-bd7e170387da is in state STARTED 2026-01-10 14:32:00.311657 | orchestrator | 2026-01-10 14:32:00 | INFO  | Task 497629a9-58db-437d-96c8-eeaba0322bae is in state STARTED 2026-01-10 14:32:00.316561 | orchestrator | 2026-01-10 14:32:00 | INFO  | Task 17d93e9b-366a-450f-8ecc-93dc96c1cffb is in state STARTED 2026-01-10 14:32:00.316626 | orchestrator | 2026-01-10 14:32:00 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:32:03.360154 | orchestrator | 2026-01-10 14:32:03 | INFO  | Task b5876ff7-8807-4469-8c8a-cbc58203025e is in state STARTED 2026-01-10 14:32:03.363017 | orchestrator | 2026-01-10 14:32:03 | INFO  | Task 88e355cf-1492-4257-ac15-292089db7a24 is in state STARTED 2026-01-10 14:32:03.376170 | orchestrator | 2026-01-10 14:32:03 | INFO  | Task 77b8ae82-f148-4bc9-b927-bd7e170387da is in state STARTED 2026-01-10 14:32:03.378754 | orchestrator | 2026-01-10 14:32:03 | INFO  | Task 497629a9-58db-437d-96c8-eeaba0322bae is in state STARTED 2026-01-10 14:32:03.382510 | orchestrator | 2026-01-10 14:32:03 | INFO  | Task 17d93e9b-366a-450f-8ecc-93dc96c1cffb is in state STARTED 2026-01-10 14:32:03.383895 | orchestrator | 2026-01-10 14:32:03 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:32:06.520072 | orchestrator | 2026-01-10 14:32:06 | INFO  | Task b5876ff7-8807-4469-8c8a-cbc58203025e is in state STARTED 2026-01-10 14:32:06.520856 | orchestrator | 2026-01-10 14:32:06 | INFO  | Task 
88e355cf-1492-4257-ac15-292089db7a24 is in state STARTED
2026-01-10 14:32:06.523200 | orchestrator | 2026-01-10 14:32:06 | INFO  | Task 77b8ae82-f148-4bc9-b927-bd7e170387da is in state STARTED
2026-01-10 14:32:06.524311 | orchestrator | 2026-01-10 14:32:06 | INFO  | Task 497629a9-58db-437d-96c8-eeaba0322bae is in state STARTED
2026-01-10 14:32:06.525601 | orchestrator | 2026-01-10 14:32:06 | INFO  | Task 17d93e9b-366a-450f-8ecc-93dc96c1cffb is in state STARTED
2026-01-10 14:32:06.525653 | orchestrator | 2026-01-10 14:32:06 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:32:09.566400 | orchestrator | 2026-01-10 14:32:09 | INFO  | Task b5876ff7-8807-4469-8c8a-cbc58203025e is in state STARTED
2026-01-10 14:32:09.570893 | orchestrator | 2026-01-10 14:32:09 | INFO  | Task 88e355cf-1492-4257-ac15-292089db7a24 is in state STARTED
2026-01-10 14:32:09.573154 | orchestrator | 2026-01-10 14:32:09 | INFO  | Task 77b8ae82-f148-4bc9-b927-bd7e170387da is in state STARTED
2026-01-10 14:32:09.575472 | orchestrator | 2026-01-10 14:32:09 | INFO  | Task 497629a9-58db-437d-96c8-eeaba0322bae is in state STARTED
2026-01-10 14:32:09.576530 | orchestrator | 2026-01-10 14:32:09 | INFO  | Task 17d93e9b-366a-450f-8ecc-93dc96c1cffb is in state STARTED
2026-01-10 14:32:09.576587 | orchestrator | 2026-01-10 14:32:09 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:32:12.622040 | orchestrator | 2026-01-10 14:32:12 | INFO  | Task b5876ff7-8807-4469-8c8a-cbc58203025e is in state STARTED
2026-01-10 14:32:12.624089 | orchestrator | 2026-01-10 14:32:12 | INFO  | Task 88e355cf-1492-4257-ac15-292089db7a24 is in state STARTED
2026-01-10 14:32:12.626345 | orchestrator | 2026-01-10 14:32:12 | INFO  | Task 77b8ae82-f148-4bc9-b927-bd7e170387da is in state STARTED
2026-01-10 14:32:12.627262 | orchestrator | 2026-01-10 14:32:12 | INFO  | Task 497629a9-58db-437d-96c8-eeaba0322bae is in state STARTED
2026-01-10 14:32:12.628162 | orchestrator | 2026-01-10 14:32:12 | INFO  | Task 17d93e9b-366a-450f-8ecc-93dc96c1cffb is in state STARTED
2026-01-10 14:32:12.628180 | orchestrator | 2026-01-10 14:32:12 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:32:15.664693 | orchestrator | 2026-01-10 14:32:15 | INFO  | Task b5876ff7-8807-4469-8c8a-cbc58203025e is in state STARTED
2026-01-10 14:32:15.667215 | orchestrator | 2026-01-10 14:32:15 | INFO  | Task 88e355cf-1492-4257-ac15-292089db7a24 is in state STARTED
2026-01-10 14:32:15.668337 | orchestrator | 2026-01-10 14:32:15 | INFO  | Task 77b8ae82-f148-4bc9-b927-bd7e170387da is in state STARTED
2026-01-10 14:32:15.669600 | orchestrator | 2026-01-10 14:32:15 | INFO  | Task 497629a9-58db-437d-96c8-eeaba0322bae is in state STARTED
2026-01-10 14:32:15.671162 | orchestrator | 2026-01-10 14:32:15 | INFO  | Task 17d93e9b-366a-450f-8ecc-93dc96c1cffb is in state STARTED
2026-01-10 14:32:15.671192 | orchestrator | 2026-01-10 14:32:15 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:32:18.725503 | orchestrator | 2026-01-10 14:32:18 | INFO  | Task b5876ff7-8807-4469-8c8a-cbc58203025e is in state STARTED
2026-01-10 14:32:18.732821 | orchestrator | 2026-01-10 14:32:18 | INFO  | Task 88e355cf-1492-4257-ac15-292089db7a24 is in state STARTED
2026-01-10 14:32:18.732942 | orchestrator | 2026-01-10 14:32:18 | INFO  | Task 77b8ae82-f148-4bc9-b927-bd7e170387da is in state STARTED
2026-01-10 14:32:18.732955 | orchestrator | 2026-01-10 14:32:18 | INFO  | Task 497629a9-58db-437d-96c8-eeaba0322bae is in state STARTED
2026-01-10 14:32:18.732965 | orchestrator | 2026-01-10 14:32:18 | INFO  | Task 17d93e9b-366a-450f-8ecc-93dc96c1cffb is in state STARTED
2026-01-10 14:32:18.732976 | orchestrator | 2026-01-10 14:32:18 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:32:21.787496 | orchestrator | 2026-01-10 14:32:21 | INFO  | Task b5876ff7-8807-4469-8c8a-cbc58203025e is in state STARTED
2026-01-10 14:32:21.787591 | orchestrator | 2026-01-10 14:32:21 | INFO  | Task 88e355cf-1492-4257-ac15-292089db7a24 is in state STARTED
2026-01-10 14:32:21.790690 | orchestrator | 2026-01-10 14:32:21 | INFO  | Task 77b8ae82-f148-4bc9-b927-bd7e170387da is in state STARTED
2026-01-10 14:32:21.792916 | orchestrator | 2026-01-10 14:32:21 | INFO  | Task 497629a9-58db-437d-96c8-eeaba0322bae is in state STARTED
2026-01-10 14:32:21.792961 | orchestrator | 2026-01-10 14:32:21 | INFO  | Task 17d93e9b-366a-450f-8ecc-93dc96c1cffb is in state STARTED
2026-01-10 14:32:21.792972 | orchestrator | 2026-01-10 14:32:21 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:32:24.857623 | orchestrator | 2026-01-10 14:32:24 | INFO  | Task b5876ff7-8807-4469-8c8a-cbc58203025e is in state STARTED
2026-01-10 14:32:24.857965 | orchestrator | 2026-01-10 14:32:24 | INFO  | Task 88e355cf-1492-4257-ac15-292089db7a24 is in state SUCCESS
2026-01-10 14:32:24.858879 | orchestrator |
2026-01-10 14:32:24.858908 | orchestrator |
2026-01-10 14:32:24.858914 | orchestrator | PLAY [Apply role homer] ********************************************************
2026-01-10 14:32:24.858919 | orchestrator |
2026-01-10 14:32:24.858923 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] ***
2026-01-10 14:32:24.858929 | orchestrator | Saturday 10 January 2026 14:31:05 +0000 (0:00:00.523) 0:00:00.523 ******
2026-01-10 14:32:24.858934 | orchestrator | ok: [testbed-manager] => {
2026-01-10 14:32:24.858940 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter."
2026-01-10 14:32:24.858945 | orchestrator | }
2026-01-10 14:32:24.858950 | orchestrator |
2026-01-10 14:32:24.858954 | orchestrator | TASK [osism.services.homer : Create traefik external network] ******************
2026-01-10 14:32:24.858958 | orchestrator | Saturday 10 January 2026 14:31:05 +0000 (0:00:00.444) 0:00:00.975 ******
2026-01-10 14:32:24.858963 | orchestrator | ok: [testbed-manager]
2026-01-10 14:32:24.858972 | orchestrator |
2026-01-10 14:32:24.858978 | orchestrator | TASK [osism.services.homer : Create required directories] **********************
2026-01-10 14:32:24.858984 | orchestrator | Saturday 10 January 2026 14:31:07 +0000 (0:00:01.660) 0:00:02.636 ******
2026-01-10 14:32:24.858990 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration)
2026-01-10 14:32:24.858998 | orchestrator | ok: [testbed-manager] => (item=/opt/homer)
2026-01-10 14:32:24.859007 | orchestrator |
2026-01-10 14:32:24.859014 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] ***************
2026-01-10 14:32:24.859020 | orchestrator | Saturday 10 January 2026 14:31:09 +0000 (0:00:01.954) 0:00:04.590 ******
2026-01-10 14:32:24.859027 | orchestrator | changed: [testbed-manager]
2026-01-10 14:32:24.859034 | orchestrator |
2026-01-10 14:32:24.859041 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] *********************
2026-01-10 14:32:24.859047 | orchestrator | Saturday 10 January 2026 14:31:11 +0000 (0:00:02.363) 0:00:06.954 ******
2026-01-10 14:32:24.859054 | orchestrator | changed: [testbed-manager]
2026-01-10 14:32:24.859061 | orchestrator |
2026-01-10 14:32:24.859067 | orchestrator | TASK [osism.services.homer : Manage homer service] *****************************
2026-01-10 14:32:24.859074 | orchestrator | Saturday 10 January 2026 14:31:12 +0000 (0:00:01.066) 0:00:08.020 ******
2026-01-10 14:32:24.859100 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left).
2026-01-10 14:32:24.859109 | orchestrator | ok: [testbed-manager]
2026-01-10 14:32:24.859116 | orchestrator |
2026-01-10 14:32:24.859123 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] *****************
2026-01-10 14:32:24.859129 | orchestrator | Saturday 10 January 2026 14:31:38 +0000 (0:00:25.755) 0:00:33.776 ******
2026-01-10 14:32:24.859136 | orchestrator | changed: [testbed-manager]
2026-01-10 14:32:24.859141 | orchestrator |
2026-01-10 14:32:24.859146 | orchestrator | PLAY RECAP *********************************************************************
2026-01-10 14:32:24.859151 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:32:24.859157 | orchestrator |
2026-01-10 14:32:24.859161 | orchestrator |
2026-01-10 14:32:24.859165 | orchestrator | TASKS RECAP ********************************************************************
2026-01-10 14:32:24.859170 | orchestrator | Saturday 10 January 2026 14:31:42 +0000 (0:00:03.465) 0:00:37.241 ******
2026-01-10 14:32:24.859184 | orchestrator | ===============================================================================
2026-01-10 14:32:24.859188 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 25.76s
2026-01-10 14:32:24.859192 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 3.47s
2026-01-10 14:32:24.859196 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 2.36s
2026-01-10 14:32:24.859200 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.95s
2026-01-10 14:32:24.859204 | orchestrator | osism.services.homer : Create traefik external network ------------------ 1.66s
2026-01-10 14:32:24.859224 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 1.07s
2026-01-10 14:32:24.859229 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.45s
2026-01-10 14:32:24.859233 | orchestrator |
2026-01-10 14:32:24.859237 | orchestrator |
2026-01-10 14:32:24.859241 | orchestrator | PLAY [Apply role openstackclient] **********************************************
2026-01-10 14:32:24.859245 | orchestrator |
2026-01-10 14:32:24.859249 | orchestrator | TASK [osism.services.openstackclient : Include tasks] **************************
2026-01-10 14:32:24.859253 | orchestrator | Saturday 10 January 2026 14:31:06 +0000 (0:00:00.763) 0:00:01.097 ******
2026-01-10 14:32:24.859258 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager
2026-01-10 14:32:24.859264 | orchestrator |
2026-01-10 14:32:24.859268 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************
2026-01-10 14:32:24.859272 | orchestrator | Saturday 10 January 2026 14:31:07 +0000 (0:00:00.763) 0:00:01.860 ******
2026-01-10 14:32:24.859276 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack)
2026-01-10 14:32:24.859281 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data)
2026-01-10 14:32:24.859285 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient)
2026-01-10 14:32:24.859290 | orchestrator |
2026-01-10 14:32:24.859294 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
2026-01-10 14:32:24.859299 | orchestrator | Saturday 10 January 2026 14:31:10 +0000 (0:00:03.024) 0:00:04.885 ******
2026-01-10 14:32:24.859303 | orchestrator | changed: [testbed-manager]
2026-01-10 14:32:24.859308 | orchestrator |
2026-01-10 14:32:24.859312 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] *********
2026-01-10 14:32:24.859317 | orchestrator | Saturday 10 January 2026 14:31:12 +0000 (0:00:02.312) 0:00:07.198 ******
2026-01-10 14:32:24.859332 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left).
2026-01-10 14:32:24.859338 | orchestrator | ok: [testbed-manager]
2026-01-10 14:32:24.859342 | orchestrator |
2026-01-10 14:32:24.859347 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
2026-01-10 14:32:24.859357 | orchestrator | Saturday 10 January 2026 14:31:45 +0000 (0:00:33.025) 0:00:40.223 ******
2026-01-10 14:32:24.859362 | orchestrator | changed: [testbed-manager]
2026-01-10 14:32:24.859366 | orchestrator |
2026-01-10 14:32:24.859371 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
2026-01-10 14:32:24.859375 | orchestrator | Saturday 10 January 2026 14:31:47 +0000 (0:00:01.086) 0:00:41.310 ******
2026-01-10 14:32:24.859380 | orchestrator | ok: [testbed-manager]
2026-01-10 14:32:24.859384 | orchestrator |
2026-01-10 14:32:24.859388 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] ***
2026-01-10 14:32:24.859393 | orchestrator | Saturday 10 January 2026 14:31:47 +0000 (0:00:00.581) 0:00:41.892 ******
2026-01-10 14:32:24.859397 | orchestrator | changed: [testbed-manager]
2026-01-10 14:32:24.859402 | orchestrator |
2026-01-10 14:32:24.859406 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
2026-01-10 14:32:24.859411 | orchestrator | Saturday 10 January 2026 14:31:49 +0000 (0:00:02.167) 0:00:44.060 ******
2026-01-10 14:32:24.859415 | orchestrator | changed: [testbed-manager]
2026-01-10 14:32:24.859420 | orchestrator |
2026-01-10 14:32:24.859424 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] ***
2026-01-10 14:32:24.859429 | orchestrator | Saturday 10 January 2026 14:31:51 +0000 (0:00:01.925) 0:00:45.985 ******
2026-01-10 14:32:24.859433 | orchestrator | changed: [testbed-manager]
2026-01-10 14:32:24.859438 | orchestrator |
2026-01-10 14:32:24.859442 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
2026-01-10 14:32:24.859447 | orchestrator | Saturday 10 January 2026 14:31:52 +0000 (0:00:00.929) 0:00:46.914 ******
2026-01-10 14:32:24.859451 | orchestrator | ok: [testbed-manager]
2026-01-10 14:32:24.859456 | orchestrator |
2026-01-10 14:32:24.859460 | orchestrator | PLAY RECAP *********************************************************************
2026-01-10 14:32:24.859465 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:32:24.859469 | orchestrator |
2026-01-10 14:32:24.859474 | orchestrator |
2026-01-10 14:32:24.859478 | orchestrator | TASKS RECAP ********************************************************************
2026-01-10 14:32:24.859483 | orchestrator | Saturday 10 January 2026 14:31:53 +0000 (0:00:00.437) 0:00:47.352 ******
2026-01-10 14:32:24.859487 | orchestrator | ===============================================================================
2026-01-10 14:32:24.859492 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 33.03s
2026-01-10 14:32:24.859496 | orchestrator | osism.services.openstackclient : Create required directories ------------ 3.02s
2026-01-10 14:32:24.859501 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 2.31s
2026-01-10 14:32:24.859507 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 2.17s
2026-01-10 14:32:24.859515 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 1.93s
2026-01-10 14:32:24.859523 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 1.09s
2026-01-10 14:32:24.859540 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.93s
2026-01-10 14:32:24.859547 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.76s
2026-01-10 14:32:24.859554 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.58s
2026-01-10 14:32:24.859561 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.44s
2026-01-10 14:32:24.859568 | orchestrator |
2026-01-10 14:32:24.859575 | orchestrator |
2026-01-10 14:32:24.859586 | orchestrator | PLAY [Apply role phpmyadmin] ***************************************************
2026-01-10 14:32:24.859593 | orchestrator |
2026-01-10 14:32:24.859600 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] *************
2026-01-10 14:32:24.859607 | orchestrator | Saturday 10 January 2026 14:31:24 +0000 (0:00:00.250) 0:00:00.250 ******
2026-01-10 14:32:24.859615 | orchestrator | ok: [testbed-manager]
2026-01-10 14:32:24.859621 | orchestrator |
2026-01-10 14:32:24.859644 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] *****************
2026-01-10 14:32:24.859651 | orchestrator | Saturday 10 January 2026 14:31:25 +0000 (0:00:00.963) 0:00:01.213 ******
2026-01-10 14:32:24.859658 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin)
2026-01-10 14:32:24.859666 | orchestrator |
2026-01-10 14:32:24.859674 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] ****************
2026-01-10 14:32:24.859681 | orchestrator | Saturday 10 January 2026 14:31:26 +0000 (0:00:01.108) 0:00:02.322 ******
2026-01-10 14:32:24.859689 | orchestrator | changed: [testbed-manager]
2026-01-10 14:32:24.859697 | orchestrator |
2026-01-10 14:32:24.859704 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] *******************
2026-01-10 14:32:24.859712 | orchestrator | Saturday 10 January 2026 14:31:27 +0000 (0:00:01.342) 0:00:03.665 ******
2026-01-10 14:32:24.859720 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left).
2026-01-10 14:32:24.859728 | orchestrator | ok: [testbed-manager]
2026-01-10 14:32:24.859736 | orchestrator |
2026-01-10 14:32:24.859744 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] *******
2026-01-10 14:32:24.859774 | orchestrator | Saturday 10 January 2026 14:32:18 +0000 (0:00:50.618) 0:00:54.283 ******
2026-01-10 14:32:24.859781 | orchestrator | changed: [testbed-manager]
2026-01-10 14:32:24.859788 | orchestrator |
2026-01-10 14:32:24.859796 | orchestrator | PLAY RECAP *********************************************************************
2026-01-10 14:32:24.859804 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:32:24.859811 | orchestrator |
2026-01-10 14:32:24.859819 | orchestrator |
2026-01-10 14:32:24.859827 | orchestrator | TASKS RECAP ********************************************************************
2026-01-10 14:32:24.859842 | orchestrator | Saturday 10 January 2026 14:32:22 +0000 (0:00:04.260) 0:00:58.544 ******
2026-01-10 14:32:24.859851 | orchestrator | ===============================================================================
2026-01-10 14:32:24.859856 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 50.62s
2026-01-10 14:32:24.859861 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 4.26s
2026-01-10 14:32:24.859865 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.34s
2026-01-10 14:32:24.859870 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 1.11s
2026-01-10 14:32:24.859874 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 0.96s
2026-01-10 14:32:24.860627 | orchestrator | 2026-01-10 14:32:24 | INFO  | Task 77b8ae82-f148-4bc9-b927-bd7e170387da is in state STARTED
2026-01-10 14:32:24.862455 | orchestrator | 2026-01-10 14:32:24 | INFO  | Task 497629a9-58db-437d-96c8-eeaba0322bae is in state STARTED
2026-01-10 14:32:24.864187 | orchestrator | 2026-01-10 14:32:24 | INFO  | Task 17d93e9b-366a-450f-8ecc-93dc96c1cffb is in state STARTED
2026-01-10 14:32:24.864218 | orchestrator | 2026-01-10 14:32:24 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:32:27.902257 | orchestrator | 2026-01-10 14:32:27 | INFO  | Task b5876ff7-8807-4469-8c8a-cbc58203025e is in state STARTED
2026-01-10 14:32:27.903980 | orchestrator | 2026-01-10 14:32:27 | INFO  | Task 77b8ae82-f148-4bc9-b927-bd7e170387da is in state STARTED
2026-01-10 14:32:27.905030 | orchestrator | 2026-01-10 14:32:27 | INFO  | Task 497629a9-58db-437d-96c8-eeaba0322bae is in state STARTED
2026-01-10 14:32:27.906296 | orchestrator | 2026-01-10 14:32:27 | INFO  | Task 17d93e9b-366a-450f-8ecc-93dc96c1cffb is in state STARTED
2026-01-10 14:32:27.906346 | orchestrator | 2026-01-10 14:32:27 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:32:30.947168 | orchestrator | 2026-01-10 14:32:30 | INFO  | Task b5876ff7-8807-4469-8c8a-cbc58203025e is in state STARTED
2026-01-10 14:32:30.947995 | orchestrator | 2026-01-10 14:32:30 | INFO  | Task 77b8ae82-f148-4bc9-b927-bd7e170387da is in state STARTED
2026-01-10 14:32:30.949841 | orchestrator | 2026-01-10 14:32:30 | INFO  | Task 497629a9-58db-437d-96c8-eeaba0322bae is in state SUCCESS
2026-01-10 14:32:30.951561 | orchestrator |
2026-01-10 14:32:30.951614 | orchestrator |
2026-01-10 14:32:30.951621 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-10 14:32:30.951625 | orchestrator |
2026-01-10 14:32:30.951630 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-10 14:32:30.951634 | orchestrator | Saturday 10 January 2026 14:31:04 +0000 (0:00:00.404) 0:00:00.404 ******
2026-01-10 14:32:30.951639 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True)
2026-01-10 14:32:30.951644 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True)
2026-01-10 14:32:30.951650 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True)
2026-01-10 14:32:30.951656 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True)
2026-01-10 14:32:30.951662 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True)
2026-01-10 14:32:30.951668 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True)
2026-01-10 14:32:30.951674 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True)
2026-01-10 14:32:30.951680 | orchestrator |
2026-01-10 14:32:30.951685 | orchestrator | PLAY [Apply role netdata] ******************************************************
2026-01-10 14:32:30.951691 | orchestrator |
2026-01-10 14:32:30.951697 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] ****
2026-01-10 14:32:30.951703 | orchestrator | Saturday 10 January 2026 14:31:06 +0000 (0:00:01.345) 0:00:01.750 ******
2026-01-10 14:32:30.951724 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-2, testbed-node-1, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-10 14:32:30.951733 | orchestrator |
2026-01-10 14:32:30.951739 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] ***
2026-01-10 14:32:30.951745 | orchestrator | Saturday 10 January 2026 14:31:08 +0000 (0:00:02.167) 0:00:03.918 ******
2026-01-10 14:32:30.951751 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:32:30.951834 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:32:30.951839 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:32:30.951843 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:32:30.951847 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:32:30.951851 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:32:30.951856 | orchestrator | ok: [testbed-manager]
2026-01-10 14:32:30.951860 | orchestrator |
2026-01-10 14:32:30.951864 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************
2026-01-10 14:32:30.951868 | orchestrator | Saturday 10 January 2026 14:31:10 +0000 (0:00:02.050) 0:00:05.968 ******
2026-01-10 14:32:30.951872 | orchestrator | ok: [testbed-manager]
2026-01-10 14:32:30.951876 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:32:30.951879 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:32:30.951883 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:32:30.951887 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:32:30.951891 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:32:30.951894 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:32:30.951898 | orchestrator |
2026-01-10 14:32:30.951902 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] *************************
2026-01-10 14:32:30.951906 | orchestrator | Saturday 10 January 2026 14:31:13 +0000 (0:00:03.152) 0:00:09.121 ******
2026-01-10 14:32:30.951910 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:32:30.951914 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:32:30.951917 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:32:30.951921 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:32:30.951925 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:32:30.951929 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:32:30.951932 | orchestrator | changed: [testbed-manager]
2026-01-10 14:32:30.951953 | orchestrator |
2026-01-10 14:32:30.951957 | orchestrator | TASK [osism.services.netdata : Add repository] *********************************
2026-01-10 14:32:30.951961 | orchestrator | Saturday 10 January 2026 14:31:15 +0000 (0:00:02.164) 0:00:11.286 ******
2026-01-10 14:32:30.951965 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:32:30.951968 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:32:30.951972 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:32:30.951976 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:32:30.952012 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:32:30.952016 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:32:30.952020 | orchestrator | changed: [testbed-manager]
2026-01-10 14:32:30.952023 | orchestrator |
2026-01-10 14:32:30.952028 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************
2026-01-10 14:32:30.952031 | orchestrator | Saturday 10 January 2026 14:31:29 +0000 (0:00:13.323) 0:00:24.609 ******
2026-01-10 14:32:30.952035 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:32:30.952039 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:32:30.952043 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:32:30.952047 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:32:30.952050 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:32:30.952054 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:32:30.952058 | orchestrator | changed: [testbed-manager]
2026-01-10 14:32:30.952061 | orchestrator |
2026-01-10 14:32:30.952065 | orchestrator | TASK [osism.services.netdata : Include config tasks] ***************************
2026-01-10 14:32:30.952069 | orchestrator | Saturday 10 January 2026 14:32:05 +0000 (0:00:36.632) 0:01:01.242 ******
2026-01-10 14:32:30.952074 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-10 14:32:30.952079 | orchestrator |
2026-01-10 14:32:30.952083 | orchestrator | TASK [osism.services.netdata : Copy configuration files] ***********************
2026-01-10 14:32:30.952087 | orchestrator | Saturday 10 January 2026 14:32:07 +0000 (0:00:01.396) 0:01:02.638 ******
2026-01-10 14:32:30.952091 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf)
2026-01-10 14:32:30.952096 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf)
2026-01-10 14:32:30.952100 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf)
2026-01-10 14:32:30.952104 | orchestrator | changed: [testbed-manager] => (item=netdata.conf)
2026-01-10 14:32:30.952120 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf)
2026-01-10 14:32:30.952124 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf)
2026-01-10 14:32:30.952128 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf)
2026-01-10 14:32:30.952142 | orchestrator | changed: [testbed-node-3] => (item=stream.conf)
2026-01-10 14:32:30.952149 | orchestrator | changed: [testbed-node-2] => (item=stream.conf)
2026-01-10 14:32:30.952155 | orchestrator | changed: [testbed-node-1] => (item=stream.conf)
2026-01-10 14:32:30.952167 | orchestrator | changed: [testbed-node-5] => (item=stream.conf)
2026-01-10 14:32:30.952177 | orchestrator | changed: [testbed-node-4] => (item=stream.conf)
2026-01-10 14:32:30.952182 | orchestrator | changed: [testbed-manager] => (item=stream.conf)
2026-01-10 14:32:30.952189 | orchestrator | changed: [testbed-node-0] => (item=stream.conf)
2026-01-10 14:32:30.952195 | orchestrator |
2026-01-10 14:32:30.952202 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] ***
2026-01-10 14:32:30.952209 | orchestrator | Saturday 10 January 2026 14:32:13 +0000 (0:00:06.542) 0:01:09.180 ******
2026-01-10 14:32:30.952215 | orchestrator | ok: [testbed-manager]
2026-01-10 14:32:30.952220 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:32:30.952227 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:32:30.952233 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:32:30.952239 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:32:30.952245 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:32:30.952252 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:32:30.952266 | orchestrator |
2026-01-10 14:32:30.952273 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] **************
2026-01-10 14:32:30.952278 | orchestrator | Saturday 10 January 2026 14:32:14 +0000 (0:00:01.133) 0:01:10.314 ******
2026-01-10 14:32:30.952282 | orchestrator | changed: [testbed-manager]
2026-01-10 14:32:30.952286 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:32:30.952290 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:32:30.952295 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:32:30.952299 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:32:30.952304 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:32:30.952308 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:32:30.952312 | orchestrator |
2026-01-10 14:32:30.952316 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] ***************
2026-01-10 14:32:30.952321 | orchestrator | Saturday 10 January 2026 14:32:16 +0000 (0:00:01.512) 0:01:11.826 ******
2026-01-10 14:32:30.952325 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:32:30.952329 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:32:30.952333 | orchestrator | ok: [testbed-manager]
2026-01-10 14:32:30.952338 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:32:30.952342 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:32:30.952346 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:32:30.952350 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:32:30.952354 | orchestrator |
2026-01-10 14:32:30.952359 | orchestrator | TASK [osism.services.netdata : Manage service netdata] *************************
2026-01-10 14:32:30.952363 | orchestrator | Saturday 10 January 2026 14:32:18 +0000 (0:00:02.355) 0:01:14.182 ******
2026-01-10 14:32:30.952367 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:32:30.952371 | orchestrator | ok: [testbed-manager]
2026-01-10 14:32:30.952375 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:32:30.952380 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:32:30.952384 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:32:30.952388 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:32:30.952392 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:32:30.952397 | orchestrator |
2026-01-10 14:32:30.952400 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] ***************
2026-01-10 14:32:30.952404 | orchestrator | Saturday 10 January 2026 14:32:20 +0000 (0:00:02.231) 0:01:16.413 ******
2026-01-10 14:32:30.952408 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager
2026-01-10 14:32:30.952414 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-10 14:32:30.952418 | orchestrator |
2026-01-10 14:32:30.952422 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] **********
2026-01-10 14:32:30.952426 | orchestrator | Saturday 10 January 2026 14:32:22 +0000 (0:00:01.560) 0:01:17.973 ******
2026-01-10 14:32:30.952430 | orchestrator | changed: [testbed-manager]
2026-01-10 14:32:30.952433 | orchestrator |
2026-01-10 14:32:30.952437 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] *************
2026-01-10 14:32:30.952441 | orchestrator | Saturday 10 January 2026 14:32:24 +0000 (0:00:02.260) 0:01:20.234 ******
2026-01-10 14:32:30.952445 | orchestrator | changed: [testbed-manager]
2026-01-10 14:32:30.952448 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:32:30.952452 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:32:30.952456 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:32:30.952460 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:32:30.952463 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:32:30.952467 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:32:30.952471 | orchestrator |
2026-01-10 14:32:30.952475 | orchestrator | PLAY RECAP *********************************************************************
2026-01-10 14:32:30.952478 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:32:30.952489 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:32:30.952493 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:32:30.952496 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:32:30.952505 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:32:30.952509 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:32:30.952517 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:32:30.952605 | orchestrator |
2026-01-10 14:32:30.952611 | orchestrator |
2026-01-10 14:32:30.952615 | orchestrator | TASKS RECAP ********************************************************************
2026-01-10 14:32:30.952619 | orchestrator | Saturday 10 January 2026 14:32:27 +0000 (0:00:03.090) 0:01:23.325 ******
2026-01-10 14:32:30.952623 | orchestrator | ===============================================================================
2026-01-10 14:32:30.952626 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 36.63s
2026-01-10 14:32:30.952630 | orchestrator | osism.services.netdata : Add repository -------------------------------- 13.32s
2026-01-10 14:32:30.952634 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 6.54s
2026-01-10 14:32:30.952638 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 3.15s
2026-01-10 14:32:30.952641 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 3.09s
2026-01-10 14:32:30.952645 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 2.36s
2026-01-10 14:32:30.952649 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 2.26s
2026-01-10 14:32:30.952652 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 2.23s
2026-01-10 14:32:30.952656 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 2.17s
2026-01-10 14:32:30.952660 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 2.16s
2026-01-10 14:32:30.952664 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 2.05s
2026-01-10 14:32:30.952667 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.56s
2026-01-10 14:32:30.952671 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.51s
2026-01-10 14:32:30.952675 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.40s
2026-01-10 14:32:30.952679 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.35s
2026-01-10 14:32:30.952683 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.13s
2026-01-10 14:32:30.952689 | orchestrator | 2026-01-10 14:32:30 | INFO  | Task 17d93e9b-366a-450f-8ecc-93dc96c1cffb is in state STARTED
2026-01-10 14:32:30.953463 | orchestrator | 2026-01-10 14:32:30 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:32:33.988206 | orchestrator | 2026-01-10 14:32:33 | INFO  | Task b5876ff7-8807-4469-8c8a-cbc58203025e is in state STARTED
2026-01-10 14:32:33.988397 | orchestrator | 2026-01-10 14:32:33 | INFO  | Task 77b8ae82-f148-4bc9-b927-bd7e170387da is in state STARTED
2026-01-10 14:32:33.989566 | orchestrator | 2026-01-10 14:32:33 | INFO  | Task 17d93e9b-366a-450f-8ecc-93dc96c1cffb is in state STARTED
2026-01-10 14:32:33.989618 | orchestrator | 2026-01-10 14:32:33 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:32:37.032664 | orchestrator | 2026-01-10 14:32:37 | INFO  | Task b5876ff7-8807-4469-8c8a-cbc58203025e is in state STARTED
2026-01-10 14:32:37.034598 | orchestrator | 2026-01-10 14:32:37 | INFO  | Task 77b8ae82-f148-4bc9-b927-bd7e170387da is in state STARTED
2026-01-10 14:32:37.036260 | orchestrator | 2026-01-10 14:32:37 | INFO  | Task 17d93e9b-366a-450f-8ecc-93dc96c1cffb is in state STARTED
2026-01-10 14:32:37.036319 | orchestrator | 2026-01-10 14:32:37 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:32:40.113502 | orchestrator | 2026-01-10 14:32:40 | INFO  | Task b5876ff7-8807-4469-8c8a-cbc58203025e is in state STARTED
2026-01-10 14:32:40.114099 | orchestrator | 2026-01-10 14:32:40 | INFO  | Task 77b8ae82-f148-4bc9-b927-bd7e170387da is in state STARTED
2026-01-10 14:32:40.115135 | orchestrator | 2026-01-10 14:32:40 | INFO  | Task 17d93e9b-366a-450f-8ecc-93dc96c1cffb is in state STARTED
2026-01-10 14:32:40.115170 | orchestrator | 2026-01-10 14:32:40 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:32:43.160361 | orchestrator | 2026-01-10 14:32:43 | INFO  | Task b5876ff7-8807-4469-8c8a-cbc58203025e is in state STARTED
2026-01-10 14:32:43.162194 | orchestrator | 2026-01-10 14:32:43 | INFO  | Task 77b8ae82-f148-4bc9-b927-bd7e170387da is in state STARTED
2026-01-10 14:32:43.164311 | orchestrator | 2026-01-10 14:32:43 |
INFO  | Task 17d93e9b-366a-450f-8ecc-93dc96c1cffb is in state STARTED 2026-01-10 14:32:43.164371 | orchestrator | 2026-01-10 14:32:43 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:32:46.214480 | orchestrator | 2026-01-10 14:32:46 | INFO  | Task b5876ff7-8807-4469-8c8a-cbc58203025e is in state STARTED 2026-01-10 14:32:46.220648 | orchestrator | 2026-01-10 14:32:46 | INFO  | Task 77b8ae82-f148-4bc9-b927-bd7e170387da is in state STARTED 2026-01-10 14:32:46.223281 | orchestrator | 2026-01-10 14:32:46 | INFO  | Task 17d93e9b-366a-450f-8ecc-93dc96c1cffb is in state STARTED 2026-01-10 14:32:46.223336 | orchestrator | 2026-01-10 14:32:46 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:32:49.278376 | orchestrator | 2026-01-10 14:32:49 | INFO  | Task b5876ff7-8807-4469-8c8a-cbc58203025e is in state STARTED 2026-01-10 14:32:49.281532 | orchestrator | 2026-01-10 14:32:49 | INFO  | Task 77b8ae82-f148-4bc9-b927-bd7e170387da is in state STARTED 2026-01-10 14:32:49.282450 | orchestrator | 2026-01-10 14:32:49 | INFO  | Task 17d93e9b-366a-450f-8ecc-93dc96c1cffb is in state STARTED 2026-01-10 14:32:49.283415 | orchestrator | 2026-01-10 14:32:49 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:32:52.339157 | orchestrator | 2026-01-10 14:32:52 | INFO  | Task b5876ff7-8807-4469-8c8a-cbc58203025e is in state STARTED 2026-01-10 14:32:52.340714 | orchestrator | 2026-01-10 14:32:52 | INFO  | Task 77b8ae82-f148-4bc9-b927-bd7e170387da is in state STARTED 2026-01-10 14:32:52.342388 | orchestrator | 2026-01-10 14:32:52 | INFO  | Task 17d93e9b-366a-450f-8ecc-93dc96c1cffb is in state STARTED 2026-01-10 14:32:52.342448 | orchestrator | 2026-01-10 14:32:52 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:32:55.386339 | orchestrator | 2026-01-10 14:32:55 | INFO  | Task b5876ff7-8807-4469-8c8a-cbc58203025e is in state STARTED 2026-01-10 14:32:55.390047 | orchestrator | 2026-01-10 14:32:55 | INFO  | Task 77b8ae82-f148-4bc9-b927-bd7e170387da is in 
state STARTED 2026-01-10 14:32:55.392632 | orchestrator | 2026-01-10 14:32:55 | INFO  | Task 17d93e9b-366a-450f-8ecc-93dc96c1cffb is in state STARTED 2026-01-10 14:32:55.393444 | orchestrator | 2026-01-10 14:32:55 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:32:58.426742 | orchestrator | 2026-01-10 14:32:58 | INFO  | Task b5876ff7-8807-4469-8c8a-cbc58203025e is in state STARTED 2026-01-10 14:32:58.428819 | orchestrator | 2026-01-10 14:32:58 | INFO  | Task 77b8ae82-f148-4bc9-b927-bd7e170387da is in state STARTED 2026-01-10 14:32:58.429407 | orchestrator | 2026-01-10 14:32:58 | INFO  | Task 17d93e9b-366a-450f-8ecc-93dc96c1cffb is in state STARTED 2026-01-10 14:32:58.430160 | orchestrator | 2026-01-10 14:32:58 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:33:01.485169 | orchestrator | 2026-01-10 14:33:01 | INFO  | Task b5876ff7-8807-4469-8c8a-cbc58203025e is in state STARTED 2026-01-10 14:33:01.486863 | orchestrator | 2026-01-10 14:33:01 | INFO  | Task 77b8ae82-f148-4bc9-b927-bd7e170387da is in state STARTED 2026-01-10 14:33:01.488317 | orchestrator | 2026-01-10 14:33:01 | INFO  | Task 17d93e9b-366a-450f-8ecc-93dc96c1cffb is in state STARTED 2026-01-10 14:33:01.488393 | orchestrator | 2026-01-10 14:33:01 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:33:04.522683 | orchestrator | 2026-01-10 14:33:04 | INFO  | Task b5876ff7-8807-4469-8c8a-cbc58203025e is in state STARTED 2026-01-10 14:33:04.525203 | orchestrator | 2026-01-10 14:33:04 | INFO  | Task 77b8ae82-f148-4bc9-b927-bd7e170387da is in state STARTED 2026-01-10 14:33:04.526760 | orchestrator | 2026-01-10 14:33:04 | INFO  | Task 17d93e9b-366a-450f-8ecc-93dc96c1cffb is in state STARTED 2026-01-10 14:33:04.526825 | orchestrator | 2026-01-10 14:33:04 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:33:07.575794 | orchestrator | 2026-01-10 14:33:07 | INFO  | Task b5876ff7-8807-4469-8c8a-cbc58203025e is in state STARTED 2026-01-10 14:33:07.578185 | orchestrator 
| 2026-01-10 14:33:07 | INFO  | Task 77b8ae82-f148-4bc9-b927-bd7e170387da is in state STARTED 2026-01-10 14:33:07.580077 | orchestrator | 2026-01-10 14:33:07 | INFO  | Task 17d93e9b-366a-450f-8ecc-93dc96c1cffb is in state STARTED 2026-01-10 14:33:07.580142 | orchestrator | 2026-01-10 14:33:07 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:33:10.619134 | orchestrator | 2026-01-10 14:33:10 | INFO  | Task b5876ff7-8807-4469-8c8a-cbc58203025e is in state STARTED 2026-01-10 14:33:10.620031 | orchestrator | 2026-01-10 14:33:10 | INFO  | Task 77b8ae82-f148-4bc9-b927-bd7e170387da is in state STARTED 2026-01-10 14:33:10.621512 | orchestrator | 2026-01-10 14:33:10 | INFO  | Task 17d93e9b-366a-450f-8ecc-93dc96c1cffb is in state STARTED 2026-01-10 14:33:10.621553 | orchestrator | 2026-01-10 14:33:10 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:33:13.660371 | orchestrator | 2026-01-10 14:33:13 | INFO  | Task b5876ff7-8807-4469-8c8a-cbc58203025e is in state STARTED 2026-01-10 14:33:13.661202 | orchestrator | 2026-01-10 14:33:13 | INFO  | Task 77b8ae82-f148-4bc9-b927-bd7e170387da is in state STARTED 2026-01-10 14:33:13.662316 | orchestrator | 2026-01-10 14:33:13 | INFO  | Task 17d93e9b-366a-450f-8ecc-93dc96c1cffb is in state STARTED 2026-01-10 14:33:13.662349 | orchestrator | 2026-01-10 14:33:13 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:33:16.690721 | orchestrator | 2026-01-10 14:33:16 | INFO  | Task b5876ff7-8807-4469-8c8a-cbc58203025e is in state STARTED 2026-01-10 14:33:16.692364 | orchestrator | 2026-01-10 14:33:16 | INFO  | Task 77b8ae82-f148-4bc9-b927-bd7e170387da is in state STARTED 2026-01-10 14:33:16.693287 | orchestrator | 2026-01-10 14:33:16 | INFO  | Task 17d93e9b-366a-450f-8ecc-93dc96c1cffb is in state STARTED 2026-01-10 14:33:16.693312 | orchestrator | 2026-01-10 14:33:16 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:33:19.740479 | orchestrator | 2026-01-10 14:33:19 | INFO  | Task 
b5876ff7-8807-4469-8c8a-cbc58203025e is in state STARTED 2026-01-10 14:33:19.743617 | orchestrator | 2026-01-10 14:33:19 | INFO  | Task 77b8ae82-f148-4bc9-b927-bd7e170387da is in state STARTED 2026-01-10 14:33:19.745249 | orchestrator | 2026-01-10 14:33:19 | INFO  | Task 17d93e9b-366a-450f-8ecc-93dc96c1cffb is in state STARTED 2026-01-10 14:33:19.745304 | orchestrator | 2026-01-10 14:33:19 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:33:22.783557 | orchestrator | 2026-01-10 14:33:22 | INFO  | Task b5876ff7-8807-4469-8c8a-cbc58203025e is in state STARTED 2026-01-10 14:33:22.784580 | orchestrator | 2026-01-10 14:33:22 | INFO  | Task 77b8ae82-f148-4bc9-b927-bd7e170387da is in state STARTED 2026-01-10 14:33:22.786304 | orchestrator | 2026-01-10 14:33:22 | INFO  | Task 17d93e9b-366a-450f-8ecc-93dc96c1cffb is in state STARTED 2026-01-10 14:33:22.786338 | orchestrator | 2026-01-10 14:33:22 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:33:25.826204 | orchestrator | 2026-01-10 14:33:25 | INFO  | Task b5876ff7-8807-4469-8c8a-cbc58203025e is in state STARTED 2026-01-10 14:33:25.827103 | orchestrator | 2026-01-10 14:33:25 | INFO  | Task 77b8ae82-f148-4bc9-b927-bd7e170387da is in state STARTED 2026-01-10 14:33:25.828116 | orchestrator | 2026-01-10 14:33:25 | INFO  | Task 17d93e9b-366a-450f-8ecc-93dc96c1cffb is in state STARTED 2026-01-10 14:33:25.828166 | orchestrator | 2026-01-10 14:33:25 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:33:28.886726 | orchestrator | 2026-01-10 14:33:28 | INFO  | Task b5876ff7-8807-4469-8c8a-cbc58203025e is in state SUCCESS 2026-01-10 14:33:28.889240 | orchestrator | 2026-01-10 14:33:28.889313 | orchestrator | 2026-01-10 14:33:28.889337 | orchestrator | PLAY [Apply role common] ******************************************************* 2026-01-10 14:33:28.889359 | orchestrator | 2026-01-10 14:33:28.889378 | orchestrator | TASK [common : include_tasks] ************************************************** 
2026-01-10 14:33:28.889397 | orchestrator | Saturday 10 January 2026 14:30:58 +0000 (0:00:00.262) 0:00:00.262 ****** 2026-01-10 14:33:28.889417 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-10 14:33:28.889440 | orchestrator | 2026-01-10 14:33:28.889452 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2026-01-10 14:33:28.889463 | orchestrator | Saturday 10 January 2026 14:30:59 +0000 (0:00:01.240) 0:00:01.503 ****** 2026-01-10 14:33:28.889474 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2026-01-10 14:33:28.889485 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2026-01-10 14:33:28.889496 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2026-01-10 14:33:28.889506 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2026-01-10 14:33:28.889517 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-01-10 14:33:28.889528 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2026-01-10 14:33:28.889539 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-01-10 14:33:28.889550 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2026-01-10 14:33:28.889560 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-01-10 14:33:28.889571 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-01-10 14:33:28.889584 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-01-10 14:33:28.889595 | orchestrator | changed: [testbed-node-3] => 
(item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-01-10 14:33:28.889605 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2026-01-10 14:33:28.889616 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-01-10 14:33:28.889648 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-01-10 14:33:28.889659 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-01-10 14:33:28.889671 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-01-10 14:33:28.889690 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-01-10 14:33:28.889701 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-01-10 14:33:28.889712 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-01-10 14:33:28.889722 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-01-10 14:33:28.889733 | orchestrator | 2026-01-10 14:33:28.889744 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-01-10 14:33:28.889755 | orchestrator | Saturday 10 January 2026 14:31:04 +0000 (0:00:04.552) 0:00:06.056 ****** 2026-01-10 14:33:28.889766 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-10 14:33:28.889777 | orchestrator | 2026-01-10 14:33:28.889789 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2026-01-10 14:33:28.889801 | orchestrator | Saturday 10 January 2026 14:31:05 +0000 (0:00:01.135) 0:00:07.191 ****** 2026-01-10 
14:33:28.889818 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-10 14:33:28.889865 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-10 14:33:28.889926 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-10 14:33:28.889940 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 
'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-10 14:33:28.889952 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-10 14:33:28.889978 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-10 14:33:28.890010 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-10 14:33:28.890136 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:33:28.890155 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:33:28.890223 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:33:28.890247 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:33:28.890281 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:33:28.890300 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:33:28.890335 | orchestrator | changed: 
[testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:33:28.890369 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:33:28.890390 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:33:28.890460 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2026-01-10 14:33:28.890482 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:33:28.890500 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:33:28.890533 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:33:28.890553 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:33:28.890573 | orchestrator | 2026-01-10 14:33:28.890593 | orchestrator | TASK 
[service-cert-copy : common | Copying over backend internal TLS certificate] *** 2026-01-10 14:33:28.890612 | orchestrator | Saturday 10 January 2026 14:31:10 +0000 (0:00:04.865) 0:00:12.056 ****** 2026-01-10 14:33:28.890636 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-10 14:33:28.890649 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:33:28.890660 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:33:28.890672 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:33:28.890718 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-10 14:33:28.890732 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:33:28.890752 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:33:28.890764 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-10 14:33:28.890780 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:33:28.890792 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-10 14:33:28.890803 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:33:28.890814 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:33:28.890864 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:33:28.890885 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:33:28.890904 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:33:28.890923 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-10 14:33:28.890942 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:33:28.890961 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:33:28.890981 | orchestrator | skipping: [testbed-manager]
2026-01-10 14:33:28.890997 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:33:28.891023 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-10 14:33:28.891045 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:33:28.891065 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:33:28.891083 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:33:28.891121 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-10 14:33:28.891154 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:33:28.891174 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:33:28.891195 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:33:28.891209 | orchestrator |
2026-01-10 14:33:28.891220 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ******
2026-01-10 14:33:28.891231 | orchestrator | Saturday 10 January 2026 14:31:12 +0000 (0:00:01.918) 0:00:13.975 ******
2026-01-10 14:33:28.891243 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-10 14:33:28.891260 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:33:28.891272 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:33:28.891283 | orchestrator | skipping: [testbed-manager]
2026-01-10 14:33:28.891294 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-10 14:33:28.891312 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:33:28.891331 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:33:28.891342 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:33:28.891353 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-10 14:33:28.891365 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:33:28.891380 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:33:28.891392 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:33:28.891403 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-10 14:33:28.891414 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:33:28.891433 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:33:28.891444 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:33:28.891461 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-10 14:33:28.891474 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:33:28.891485 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:33:28.891496 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-10 14:33:28.891521 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:33:28.891540 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:33:28.891558 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:33:28.891576 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:33:28.891594 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-10 14:33:28.891631 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:33:28.891648 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:33:28.891663 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:33:28.891681 | orchestrator |
2026-01-10 14:33:28.891699 | orchestrator | TASK [common : Copying over /run subdirectories conf] **************************
2026-01-10 14:33:28.891719 | orchestrator | Saturday 10 January 2026 14:31:14 +0000 (0:00:02.403) 0:00:16.379 ******
2026-01-10 14:33:28.891738 | orchestrator | skipping: [testbed-manager]
2026-01-10 14:33:28.891755 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:33:28.891772 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:33:28.891791 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:33:28.891808 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:33:28.891847 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:33:28.891868 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:33:28.891886 | orchestrator |
2026-01-10 14:33:28.891905 | orchestrator | TASK [common : Restart systemd-tmpfiles] ***************************************
2026-01-10 14:33:28.891924 | orchestrator | Saturday 10 January 2026 14:31:16 +0000 (0:00:01.727) 0:00:18.106 ******
2026-01-10 14:33:28.891942 | orchestrator | skipping: [testbed-manager]
2026-01-10 14:33:28.891959 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:33:28.891970 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:33:28.891981 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:33:28.891992 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:33:28.892002 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:33:28.892012 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:33:28.892023 | orchestrator |
2026-01-10 14:33:28.892034 | orchestrator | TASK [common : Copying over config.json files for services] ********************
2026-01-10 14:33:28.892045 | orchestrator | Saturday 10 January 2026 14:31:18 +0000 (0:00:01.988) 0:00:20.095 ******
2026-01-10 14:33:28.892057 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-10 14:33:28.892069 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-10 14:33:28.892090 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-10 14:33:28.892101 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-10 14:33:28.892120 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-10 14:33:28.892142 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:33:28.892154 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:33:28.892165 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-10 14:33:28.892181 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:33:28.892199 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-10 14:33:28.892214 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:33:28.892242 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:33:28.892272 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:33:28.892293 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:33:28.892311 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:33:28.892337 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:33:28.892368 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:33:28.892389 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:33:28.892410 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': 
{'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:33:28.892450 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:33:28.892463 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:33:28.892474 | orchestrator |
2026-01-10 14:33:28.892485 | orchestrator | TASK [common : Find custom fluentd input config files] *************************
2026-01-10 14:33:28.892496 | orchestrator | Saturday 10 January 2026 14:31:25 +0000 (0:00:06.592) 0:00:26.687 ******
2026-01-10 14:33:28.892507 | orchestrator | [WARNING]: Skipped
2026-01-10 14:33:28.892526 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due
2026-01-10 14:33:28.892548 | orchestrator | to this access issue:
2026-01-10 14:33:28.892574 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a
2026-01-10 14:33:28.892592 | orchestrator | directory
2026-01-10 14:33:28.892611 | orchestrator | ok: [testbed-manager -> localhost]
2026-01-10 14:33:28.892629 | orchestrator |
2026-01-10 14:33:28.892648 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************
2026-01-10 14:33:28.892667 | orchestrator | Saturday 10 January 2026 14:31:26 +0000 (0:00:01.698) 0:00:28.385 ******
2026-01-10 14:33:28.892685 | orchestrator | [WARNING]: Skipped
2026-01-10 14:33:28.892704 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due
2026-01-10 14:33:28.892715 | orchestrator | to this access issue:
2026-01-10 14:33:28.892736 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a
2026-01-10 14:33:28.892747 | orchestrator | directory
2026-01-10 14:33:28.892758 | orchestrator | ok: [testbed-manager -> localhost]
2026-01-10 14:33:28.892768 | orchestrator |
2026-01-10 14:33:28.892779 | orchestrator | TASK [common : Find custom fluentd format config files] ************************
2026-01-10 14:33:28.892790 | orchestrator | Saturday 10 January 2026 14:31:27 +0000 (0:00:01.023) 0:00:29.409 ******
2026-01-10 14:33:28.892800 | orchestrator | [WARNING]: Skipped
2026-01-10 14:33:28.892811 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due
2026-01-10 14:33:28.892859 | orchestrator | to this access issue:
2026-01-10 14:33:28.892878 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a
2026-01-10 14:33:28.892889 | orchestrator | directory
2026-01-10 14:33:28.892900 | orchestrator | ok: [testbed-manager -> localhost]
2026-01-10 14:33:28.892911 | orchestrator |
2026-01-10 14:33:28.892921 | orchestrator | TASK [common : Find custom fluentd output config files] ************************
2026-01-10 14:33:28.892932 | orchestrator | Saturday 10 January 2026 14:31:28 +0000 (0:00:00.975) 0:00:30.384 ******
2026-01-10 14:33:28.892943 | orchestrator | [WARNING]: Skipped
2026-01-10 14:33:28.892960 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due
2026-01-10 14:33:28.892971 | orchestrator | to this access issue:
2026-01-10 14:33:28.892982 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a
2026-01-10 14:33:28.892992 | orchestrator | directory
2026-01-10 14:33:28.893003 | orchestrator | ok: [testbed-manager -> localhost]
2026-01-10 14:33:28.893014 | orchestrator |
2026-01-10 14:33:28.893024 | orchestrator | TASK [common : Copying over fluentd.conf] **************************************
2026-01-10 14:33:28.893035 | orchestrator | Saturday 10 January 2026 14:31:29 +0000 (0:00:00.995) 0:00:31.380 ******
2026-01-10 14:33:28.893045 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:33:28.893056 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:33:28.893067 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:33:28.893077 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:33:28.893088 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:33:28.893099 | orchestrator | changed: [testbed-manager]
2026-01-10 14:33:28.893109 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:33:28.893119 | orchestrator |
2026-01-10 14:33:28.893130 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************
2026-01-10 14:33:28.893141 | orchestrator | Saturday 10 January 2026 14:31:34 +0000 (0:00:04.801) 0:00:36.181 ******
2026-01-10 14:33:28.893151 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-01-10 14:33:28.893162 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-01-10 14:33:28.893173 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-01-10 14:33:28.893184 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-01-10 14:33:28.893195 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-01-10 14:33:28.893205 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-01-10 14:33:28.893216 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-01-10 14:33:28.893227 | orchestrator |
2026-01-10 14:33:28.893237 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] ***************************
2026-01-10 14:33:28.893248 | orchestrator | Saturday 10 January 2026 14:31:39 +0000 (0:00:04.454) 0:00:40.635 ******
2026-01-10 14:33:28.893259 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:33:28.893270 | orchestrator | changed: [testbed-manager]
2026-01-10 14:33:28.893281 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:33:28.893298 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:33:28.893318 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:33:28.893329 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:33:28.893340 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:33:28.893350 | orchestrator |
2026-01-10 14:33:28.893361 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] ***
2026-01-10 14:33:28.893372 | orchestrator | Saturday 10 January 2026 14:31:42 +0000 (0:00:03.731) 0:00:44.367 ******
2026-01-10 14:33:28.893384 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-10 14:33:28.893396 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:33:28.893408 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-10 14:33:28.893423 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:33:28.893435 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-10 14:33:28.893446 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:33:28.893476 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:33:28.893519 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:33:28.893540 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:33:28.893557 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-10 14:33:28.893575 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 
14:33:28.893600 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-10 14:33:28.893620 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:33:28.893638 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:33:28.893683 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:33:28.893704 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-10 14:33:28.893723 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:33:28.893742 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-10 14:33:28.893769 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:33:28.893789 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:33:28.893809 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:33:28.893895 | orchestrator | 2026-01-10 14:33:28.893908 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2026-01-10 14:33:28.893920 | orchestrator | Saturday 10 January 2026 14:31:46 +0000 (0:00:03.385) 0:00:47.752 ****** 2026-01-10 14:33:28.893930 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-10 
14:33:28.893941 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-10 14:33:28.893952 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-10 14:33:28.893963 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-10 14:33:28.893974 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-10 14:33:28.893993 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-10 14:33:28.894011 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-10 14:33:28.894080 | orchestrator | 2026-01-10 14:33:28.894114 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2026-01-10 14:33:28.894144 | orchestrator | Saturday 10 January 2026 14:31:49 +0000 (0:00:03.662) 0:00:51.414 ****** 2026-01-10 14:33:28.894163 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-10 14:33:28.894183 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-10 14:33:28.894200 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-10 14:33:28.894218 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-10 14:33:28.894235 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-10 14:33:28.894252 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-10 14:33:28.894268 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-10 14:33:28.894285 | orchestrator 
| 2026-01-10 14:33:28.894302 | orchestrator | TASK [common : Check common containers] **************************************** 2026-01-10 14:33:28.894319 | orchestrator | Saturday 10 January 2026 14:31:53 +0000 (0:00:03.367) 0:00:54.782 ****** 2026-01-10 14:33:28.894338 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-10 14:33:28.894357 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-10 14:33:28.894384 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-10 14:33:28.894415 | orchestrator | changed: 
[testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:33:28.894432 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-10 14:33:28.894453 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:33:28.894463 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-10 14:33:28.894473 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:33:28.894483 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-10 14:33:28.894494 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:33:28.894511 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:33:28.894521 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:33:28.894531 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-10 14:33:28.894546 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:33:28.894557 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:33:28.894567 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:33:28.894577 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:33:28.894592 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:33:28.894609 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:33:28.894625 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:33:28.894642 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:33:28.894658 | orchestrator |
2026-01-10 14:33:28.894675 | orchestrator | TASK [common : Creating log volume] ********************************************
2026-01-10 14:33:28.894692 | orchestrator | Saturday 10 January 2026 14:31:57 +0000 (0:00:04.371) 0:00:59.153 ******
2026-01-10 14:33:28.894716 | orchestrator | changed: [testbed-manager]
2026-01-10 14:33:28.894734 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:33:28.894748 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:33:28.894758 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:33:28.894768 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:33:28.894777 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:33:28.894787 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:33:28.894796 | orchestrator |
2026-01-10 14:33:28.894806 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] ***********************
2026-01-10 14:33:28.894816 | orchestrator | Saturday 10 January 2026 14:31:59 +0000 (0:00:01.820) 0:01:00.973 ******
2026-01-10 14:33:28.894846 | orchestrator | changed: [testbed-manager]
2026-01-10 14:33:28.894856 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:33:28.894866 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:33:28.894875 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:33:28.894904 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:33:28.894914 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:33:28.894923 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:33:28.894933 | orchestrator |
2026-01-10 14:33:28.894942 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-01-10 14:33:28.894962 | orchestrator | Saturday 10 January 2026 14:32:00 +0000 (0:00:01.434) 0:01:02.408 ******
2026-01-10 14:33:28.894984 | orchestrator |
2026-01-10 14:33:28.894994 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-01-10 14:33:28.895003 | orchestrator | Saturday 10 January 2026 14:32:00 +0000 (0:00:00.092) 0:01:02.500 ******
2026-01-10 14:33:28.895013 | orchestrator |
2026-01-10 14:33:28.895022 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-01-10 14:33:28.895041 | orchestrator | Saturday 10 January 2026 14:32:01 +0000 (0:00:00.080) 0:01:02.581 ******
2026-01-10 14:33:28.895068 | orchestrator |
2026-01-10 14:33:28.895078 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-01-10 14:33:28.895088 | orchestrator | Saturday 10 January 2026 14:32:01 +0000 (0:00:00.243) 0:01:02.825 ******
2026-01-10 14:33:28.895097 | orchestrator |
2026-01-10 14:33:28.895115 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-01-10 14:33:28.895125 | orchestrator | Saturday 10 January 2026 14:32:01 +0000 (0:00:00.068) 0:01:02.893 ******
2026-01-10 14:33:28.895135 | orchestrator |
2026-01-10 14:33:28.895144 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-01-10 14:33:28.895153 | orchestrator | Saturday 10 January 2026 14:32:01 +0000 (0:00:00.065) 0:01:02.958 ******
2026-01-10 14:33:28.895163 | orchestrator |
2026-01-10 14:33:28.895173 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-01-10 14:33:28.895182 | orchestrator | Saturday 10 January 2026 14:32:01 +0000 (0:00:00.066) 0:01:03.025 ******
2026-01-10 14:33:28.895191 | orchestrator |
2026-01-10 14:33:28.895201 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] ***************************
2026-01-10 14:33:28.895210 | orchestrator | Saturday 10 January 2026 14:32:01 +0000 (0:00:00.089) 0:01:03.114 ******
2026-01-10 14:33:28.895220 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:33:28.895229 | orchestrator | changed: [testbed-manager]
2026-01-10 14:33:28.895238 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:33:28.895248 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:33:28.895257 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:33:28.895272 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:33:28.895281 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:33:28.895291 | orchestrator |
2026-01-10 14:33:28.895301 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] *********************
2026-01-10 14:33:28.895310 | orchestrator | Saturday 10 January 2026 14:32:37 +0000 (0:00:35.851) 0:01:38.965 ******
2026-01-10 14:33:28.895320 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:33:28.895329 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:33:28.895338 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:33:28.895348 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:33:28.895366 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:33:28.895376 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:33:28.895386 | orchestrator | changed: [testbed-manager]
2026-01-10 14:33:28.895395 | orchestrator |
2026-01-10 14:33:28.895405 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] ****
2026-01-10 14:33:28.895415 | orchestrator | Saturday 10 January 2026 14:33:15 +0000 (0:00:38.546) 0:02:17.512 ******
2026-01-10 14:33:28.895424 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:33:28.895434 | orchestrator | ok: [testbed-manager]
2026-01-10 14:33:28.895444 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:33:28.895453 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:33:28.895463 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:33:28.895472 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:33:28.895482 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:33:28.895491 | orchestrator |
2026-01-10 14:33:28.895501 | orchestrator | RUNNING HANDLER [common : Restart cron container] ******************************
2026-01-10 14:33:28.895510 | orchestrator | Saturday 10 January 2026 14:33:17 +0000 (0:00:01.899) 0:02:19.411 ******
2026-01-10 14:33:28.895520 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:33:28.895530 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:33:28.895539 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:33:28.895548 | orchestrator | changed: [testbed-manager]
2026-01-10 14:33:28.895558 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:33:28.895567 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:33:28.895722 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:33:28.895736 | orchestrator |
2026-01-10 14:33:28.895746 | orchestrator | PLAY RECAP *********************************************************************
2026-01-10 14:33:28.895757 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-01-10 14:33:28.895775 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-01-10 14:33:28.895797 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-01-10 14:33:28.895815 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-01-10 14:33:28.895839 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-01-10 14:33:28.895849 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-01-10 14:33:28.895863 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-01-10 14:33:28.895885 | orchestrator |
2026-01-10 14:33:28.895906 | orchestrator |
2026-01-10 14:33:28.895921 | orchestrator | TASKS RECAP ********************************************************************
2026-01-10 14:33:28.895936 | orchestrator | Saturday 10 January 2026 14:33:27 +0000 (0:00:09.905) 0:02:29.317 ******
2026-01-10 14:33:28.895951 | orchestrator | ===============================================================================
2026-01-10 14:33:28.895968 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 38.55s
2026-01-10 14:33:28.895985 | orchestrator | common : Restart fluentd container ------------------------------------- 35.85s
2026-01-10 14:33:28.896001 | orchestrator | common : Restart cron container ----------------------------------------- 9.91s
2026-01-10 14:33:28.896036 | orchestrator | common : Copying over config.json files for services -------------------- 6.59s
2026-01-10 14:33:28.896050 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 4.87s
2026-01-10 14:33:28.896059 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 4.80s
2026-01-10 14:33:28.896069 | orchestrator | common : Ensuring config directories exist ------------------------------ 4.55s
2026-01-10 14:33:28.896078 | orchestrator | common : Copying over cron logrotate config file ------------------------ 4.45s
2026-01-10 14:33:28.896088 | orchestrator | common : Check common containers ---------------------------------------- 4.37s
2026-01-10 14:33:28.896097 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 3.73s
2026-01-10 14:33:28.896106 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 3.66s
2026-01-10 14:33:28.896116 | orchestrator | common : Ensuring config directories have correct owner and permission --- 3.39s
2026-01-10 14:33:28.896125 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 3.37s
2026-01-10 14:33:28.896135 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 2.40s
2026-01-10 14:33:28.896144 | orchestrator | common : Restart systemd-tmpfiles --------------------------------------- 1.99s
2026-01-10 14:33:28.896154 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 1.92s
2026-01-10 14:33:28.896170 | orchestrator | common : Initializing toolbox container using normal user --------------- 1.90s
2026-01-10 14:33:28.896180 | orchestrator | common : Creating log volume -------------------------------------------- 1.82s
2026-01-10 14:33:28.896189 | orchestrator | common : Copying over /run subdirectories conf -------------------------- 1.73s
2026-01-10 14:33:28.896199 | orchestrator | common : Find custom fluentd input config files ------------------------- 1.70s
2026-01-10 14:33:28.896208 | orchestrator | 2026-01-10 14:33:28 | INFO  | Task 77b8ae82-f148-4bc9-b927-bd7e170387da is in state STARTED
2026-01-10 14:33:28.896450 | orchestrator | 2026-01-10 14:33:28 | INFO  | Task 17d93e9b-366a-450f-8ecc-93dc96c1cffb is in state STARTED
2026-01-10 14:33:28.896481 | orchestrator | 2026-01-10 14:33:28 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:33:31.977846 | orchestrator | 2026-01-10 14:33:31 | INFO  | Task e39f9cf5-d7e2-4973-9d51-a2b9770afa9b is in state STARTED
2026-01-10 14:33:31.979347 | orchestrator | 2026-01-10 14:33:31 | INFO  | Task 99d53ac9-324f-4868-bf5c-6c4473efd3fb is in state STARTED
2026-01-10 14:33:31.984655 | orchestrator | 2026-01-10 14:33:31 | INFO  | Task 955b2409-5d44-41fc-9935-7bfdb67baec4 is in state STARTED
2026-01-10 14:33:31.985350 | orchestrator | 2026-01-10 14:33:31 | INFO  | Task 77b8ae82-f148-4bc9-b927-bd7e170387da is in state STARTED
2026-01-10 14:33:31.988736 | orchestrator | 2026-01-10 14:33:31 | INFO  | Task 626c1919-04db-466c-986d-9b1fd35dcaf1 is in state STARTED
2026-01-10 14:33:31.989529 | orchestrator | 2026-01-10 14:33:31 | INFO  | Task 17d93e9b-366a-450f-8ecc-93dc96c1cffb is in state STARTED
2026-01-10 14:33:31.989584 | orchestrator | 2026-01-10 14:33:31 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:33:35.029262 | orchestrator | 2026-01-10 14:33:35 | INFO  | Task e39f9cf5-d7e2-4973-9d51-a2b9770afa9b is in state STARTED
2026-01-10 14:33:35.029356 | orchestrator | 2026-01-10 14:33:35 | INFO  | Task 99d53ac9-324f-4868-bf5c-6c4473efd3fb is in state STARTED
2026-01-10 14:33:35.029368 | orchestrator | 2026-01-10 14:33:35 | INFO  | Task 955b2409-5d44-41fc-9935-7bfdb67baec4 is in state STARTED
2026-01-10 14:33:35.029376 | orchestrator | 2026-01-10 14:33:35 | INFO  | Task 77b8ae82-f148-4bc9-b927-bd7e170387da is in state STARTED
2026-01-10 14:33:35.029395 | orchestrator | 2026-01-10 14:33:35 | INFO  | Task 626c1919-04db-466c-986d-9b1fd35dcaf1 is in state STARTED
2026-01-10 14:33:35.030215 | orchestrator | 2026-01-10 14:33:35 | INFO  | Task 17d93e9b-366a-450f-8ecc-93dc96c1cffb is in state STARTED
2026-01-10 14:33:35.030280 | orchestrator | 2026-01-10 14:33:35 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:33:38.059943 | orchestrator | 2026-01-10 14:33:38 | INFO  | Task e39f9cf5-d7e2-4973-9d51-a2b9770afa9b is in state STARTED
2026-01-10 14:33:38.062206 | orchestrator | 2026-01-10 14:33:38 | INFO  | Task 99d53ac9-324f-4868-bf5c-6c4473efd3fb is in state STARTED
2026-01-10 14:33:38.062725 | orchestrator | 2026-01-10 14:33:38 | INFO  | Task 955b2409-5d44-41fc-9935-7bfdb67baec4 is in state STARTED
2026-01-10 14:33:38.063520 | orchestrator | 2026-01-10 14:33:38 | INFO  | Task 77b8ae82-f148-4bc9-b927-bd7e170387da is in state STARTED
2026-01-10 14:33:38.077524 | orchestrator | 2026-01-10 14:33:38 | INFO  | Task 626c1919-04db-466c-986d-9b1fd35dcaf1 is in state STARTED
2026-01-10 14:33:38.077609 | orchestrator | 2026-01-10 14:33:38 | INFO  | Task 17d93e9b-366a-450f-8ecc-93dc96c1cffb is in state STARTED
2026-01-10 14:33:38.077621 | orchestrator | 2026-01-10 14:33:38 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:33:41.116431 | orchestrator | 2026-01-10 14:33:41 | INFO  | Task e39f9cf5-d7e2-4973-9d51-a2b9770afa9b is in state STARTED
2026-01-10 14:33:41.116891 | orchestrator | 2026-01-10 14:33:41 | INFO  | Task 99d53ac9-324f-4868-bf5c-6c4473efd3fb is in state STARTED
2026-01-10 14:33:41.117775 | orchestrator | 2026-01-10 14:33:41 | INFO  | Task 955b2409-5d44-41fc-9935-7bfdb67baec4 is in state STARTED
2026-01-10 14:33:41.118741 | orchestrator | 2026-01-10 14:33:41 | INFO  | Task 77b8ae82-f148-4bc9-b927-bd7e170387da is in state STARTED
2026-01-10 14:33:41.120042 | orchestrator | 2026-01-10 14:33:41 | INFO  | Task 626c1919-04db-466c-986d-9b1fd35dcaf1 is in state STARTED
2026-01-10 14:33:41.120721 | orchestrator | 2026-01-10 14:33:41 | INFO  | Task 17d93e9b-366a-450f-8ecc-93dc96c1cffb is in state STARTED
2026-01-10 14:33:41.120776 | orchestrator | 2026-01-10 14:33:41 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:33:44.173979 | orchestrator | 2026-01-10 14:33:44 | INFO  | Task e39f9cf5-d7e2-4973-9d51-a2b9770afa9b is in state STARTED
2026-01-10 14:33:44.174376 | orchestrator | 2026-01-10 14:33:44 | INFO  | Task 99d53ac9-324f-4868-bf5c-6c4473efd3fb is in state STARTED
2026-01-10 14:33:44.175899 | orchestrator | 2026-01-10 14:33:44 | INFO  | Task 955b2409-5d44-41fc-9935-7bfdb67baec4 is in state STARTED
2026-01-10 14:33:44.177228 | orchestrator | 2026-01-10 14:33:44 | INFO  | Task 77b8ae82-f148-4bc9-b927-bd7e170387da is in state STARTED
2026-01-10 14:33:44.178655 | orchestrator | 2026-01-10 14:33:44 | INFO  | Task 626c1919-04db-466c-986d-9b1fd35dcaf1 is in state STARTED
2026-01-10 14:33:44.180685 | orchestrator | 2026-01-10 14:33:44 | INFO  | Task 17d93e9b-366a-450f-8ecc-93dc96c1cffb is in state STARTED
2026-01-10 14:33:44.181031 | orchestrator | 2026-01-10 14:33:44 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:33:47.218549 | orchestrator | 2026-01-10 14:33:47 | INFO  | Task e39f9cf5-d7e2-4973-9d51-a2b9770afa9b is in state STARTED
2026-01-10 14:33:47.219860 | orchestrator | 2026-01-10 14:33:47 | INFO  | Task 99d53ac9-324f-4868-bf5c-6c4473efd3fb is in state STARTED
2026-01-10 14:33:47.220938 | orchestrator | 2026-01-10 14:33:47 | INFO  | Task 955b2409-5d44-41fc-9935-7bfdb67baec4 is in state STARTED
2026-01-10 14:33:47.222109 | orchestrator | 2026-01-10 14:33:47 | INFO  | Task 77b8ae82-f148-4bc9-b927-bd7e170387da is in state STARTED
2026-01-10 14:33:47.223240 | orchestrator | 2026-01-10 14:33:47 | INFO  | Task 626c1919-04db-466c-986d-9b1fd35dcaf1 is in state STARTED
2026-01-10 14:33:47.224371 | orchestrator | 2026-01-10 14:33:47 | INFO  | Task 17d93e9b-366a-450f-8ecc-93dc96c1cffb is in state STARTED
2026-01-10 14:33:47.224399 | orchestrator | 2026-01-10 14:33:47 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:33:50.263076 | orchestrator | 2026-01-10 14:33:50 | INFO  | Task e39f9cf5-d7e2-4973-9d51-a2b9770afa9b is in state STARTED
2026-01-10 14:33:50.263928 | orchestrator | 2026-01-10 14:33:50 | INFO  | Task 99d53ac9-324f-4868-bf5c-6c4473efd3fb is in state STARTED
2026-01-10 14:33:50.263968 | orchestrator | 2026-01-10 14:33:50 | INFO  | Task 955b2409-5d44-41fc-9935-7bfdb67baec4 is in state STARTED
2026-01-10 14:33:50.263974 | orchestrator | 2026-01-10 14:33:50 | INFO  | Task 77b8ae82-f148-4bc9-b927-bd7e170387da is in state STARTED
2026-01-10 14:33:50.263980 | orchestrator | 2026-01-10 14:33:50 | INFO  | Task 626c1919-04db-466c-986d-9b1fd35dcaf1 is in state STARTED
2026-01-10 14:33:50.263985 | orchestrator | 2026-01-10 14:33:50 | INFO  | Task 17d93e9b-366a-450f-8ecc-93dc96c1cffb is in state STARTED
2026-01-10 14:33:50.263991 | orchestrator | 2026-01-10 14:33:50 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:33:53.310524 | orchestrator | 2026-01-10 14:33:53 | INFO  | Task e39f9cf5-d7e2-4973-9d51-a2b9770afa9b is in state STARTED
2026-01-10 14:33:53.310591 | orchestrator | 2026-01-10 14:33:53 | INFO  | Task 99d53ac9-324f-4868-bf5c-6c4473efd3fb is in state STARTED
2026-01-10 14:33:53.310597 | orchestrator | 2026-01-10 14:33:53 | INFO  | Task 955b2409-5d44-41fc-9935-7bfdb67baec4 is in state STARTED
2026-01-10 14:33:53.314295 | orchestrator | 2026-01-10 14:33:53 | INFO  | Task 77b8ae82-f148-4bc9-b927-bd7e170387da is in state STARTED
2026-01-10 14:33:53.314360 | orchestrator | 2026-01-10 14:33:53 | INFO  | Task 626c1919-04db-466c-986d-9b1fd35dcaf1 is in state SUCCESS
2026-01-10 14:33:53.315253 | orchestrator | 2026-01-10 14:33:53 | INFO  | Task 17d93e9b-366a-450f-8ecc-93dc96c1cffb is in state STARTED
2026-01-10 14:33:53.315305 | orchestrator | 2026-01-10 14:33:53 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:33:56.406241 | orchestrator | 2026-01-10 14:33:56 | INFO  | Task e49c1d62-0376-427a-8e08-be74ad4fd0d4 is in state STARTED
2026-01-10 14:33:56.408379 | orchestrator | 2026-01-10 14:33:56 | INFO  | Task e39f9cf5-d7e2-4973-9d51-a2b9770afa9b is in state STARTED
2026-01-10 14:33:56.410573 | orchestrator | 2026-01-10 14:33:56 | INFO  | Task 99d53ac9-324f-4868-bf5c-6c4473efd3fb is in state STARTED
2026-01-10 14:33:56.411684 | orchestrator | 2026-01-10 14:33:56 | INFO  | Task 955b2409-5d44-41fc-9935-7bfdb67baec4 is in state STARTED
2026-01-10 14:33:56.412332 | orchestrator | 2026-01-10 14:33:56 | INFO  | Task 77b8ae82-f148-4bc9-b927-bd7e170387da is in state STARTED
2026-01-10 14:33:56.413689 | orchestrator | 2026-01-10 14:33:56 | INFO  | Task 17d93e9b-366a-450f-8ecc-93dc96c1cffb is in state STARTED
2026-01-10 14:33:56.413735 | orchestrator | 2026-01-10 14:33:56 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:33:59.469796 | orchestrator | 2026-01-10 14:33:59 | INFO  | Task e49c1d62-0376-427a-8e08-be74ad4fd0d4 is in state STARTED
2026-01-10 14:33:59.469973 | orchestrator | 2026-01-10 14:33:59 | INFO  | Task e39f9cf5-d7e2-4973-9d51-a2b9770afa9b is in state STARTED
2026-01-10 14:33:59.470480 | orchestrator | 2026-01-10 14:33:59 | INFO  | Task 99d53ac9-324f-4868-bf5c-6c4473efd3fb is in state STARTED
2026-01-10 14:33:59.471045 | orchestrator | 2026-01-10 14:33:59 | INFO  | Task 955b2409-5d44-41fc-9935-7bfdb67baec4 is in state STARTED
2026-01-10 14:33:59.473562 | orchestrator | 2026-01-10 14:33:59 | INFO  | Task 77b8ae82-f148-4bc9-b927-bd7e170387da is in state STARTED
2026-01-10 14:33:59.474195 | orchestrator | 2026-01-10 14:33:59 | INFO  | Task 17d93e9b-366a-450f-8ecc-93dc96c1cffb is in state STARTED
2026-01-10 14:33:59.474227 | orchestrator | 2026-01-10 14:33:59 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:34:02.532583 | orchestrator | 2026-01-10 14:34:02 | INFO  | Task e49c1d62-0376-427a-8e08-be74ad4fd0d4 is in state STARTED
2026-01-10 14:34:02.535272 | orchestrator | 2026-01-10 14:34:02 | INFO  | Task e39f9cf5-d7e2-4973-9d51-a2b9770afa9b is in state SUCCESS
2026-01-10 14:34:02.535413 | orchestrator | 2026-01-10 14:34:02 | INFO  | Task 99d53ac9-324f-4868-bf5c-6c4473efd3fb is in state STARTED
2026-01-10 14:34:02.535423 | orchestrator | 2026-01-10 14:34:02 | INFO  | Task 955b2409-5d44-41fc-9935-7bfdb67baec4 is in state STARTED
2026-01-10 14:34:02.536550 | orchestrator |
2026-01-10 14:34:02.536617 | orchestrator |
2026-01-10 14:34:02.536631 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-10 14:34:02.536643 | orchestrator |
2026-01-10 14:34:02.536652 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-10 14:34:02.536662 | orchestrator | Saturday 10 January 2026 14:33:36 +0000 (0:00:00.722) 0:00:00.722 ******
2026-01-10 14:34:02.536672 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:34:02.536683 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:34:02.536692 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:34:02.536701 | orchestrator |
2026-01-10 14:34:02.536711 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-10 14:34:02.536721 | orchestrator | Saturday 10 January 2026 14:33:36 +0000 (0:00:00.600) 0:00:01.323 ******
2026-01-10 14:34:02.536730 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True)
2026-01-10 14:34:02.536740 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True)
2026-01-10 14:34:02.536750 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True)
2026-01-10 14:34:02.536760 | orchestrator |
2026-01-10 14:34:02.536769 | orchestrator | PLAY [Apply role memcached] ****************************************************
2026-01-10 14:34:02.536778 | orchestrator |
2026-01-10 14:34:02.536812 | orchestrator | TASK [memcached : include_tasks] ***********************************************
2026-01-10 14:34:02.536822 | orchestrator | Saturday 10 January 2026 14:33:37 +0000 (0:00:01.114) 0:00:02.437 ******
2026-01-10 14:34:02.536831 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:34:02.536894 | orchestrator |
2026-01-10 14:34:02.536905 | orchestrator | TASK [memcached : Ensuring config directories exist] ***************************
2026-01-10 14:34:02.536915 | orchestrator | Saturday 10 January 2026 14:33:38 +0000 (0:00:00.927) 0:00:03.364 ******
2026-01-10 14:34:02.536924 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2026-01-10 14:34:02.536934 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2026-01-10 14:34:02.536943 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2026-01-10 14:34:02.536952 | orchestrator |
2026-01-10 14:34:02.536962 | orchestrator | TASK [memcached : Copying over config.json files for services] *****************
2026-01-10 14:34:02.536971 | orchestrator | Saturday 10 January 2026 14:33:39 +0000 (0:00:01.080) 0:00:04.445 ******
2026-01-10 14:34:02.536980 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2026-01-10 14:34:02.536989 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2026-01-10 14:34:02.537161 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2026-01-10 14:34:02.537170 | orchestrator |
2026-01-10 14:34:02.537178 | orchestrator | TASK [memcached : Check memcached container] ***********************************
2026-01-10 14:34:02.537185 | orchestrator | Saturday 10 January 2026 14:33:42 +0000 (0:00:02.963) 0:00:07.409 ******
2026-01-10 14:34:02.537193 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:34:02.537200 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:34:02.537206 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:34:02.537213 | orchestrator |
2026-01-10 14:34:02.537220 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] **********************
2026-01-10 14:34:02.537227 | orchestrator | Saturday 10 January 2026 14:33:45 +0000 (0:00:02.227) 0:00:09.636 ******
2026-01-10 14:34:02.537234 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:34:02.537241 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:34:02.537248 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:34:02.537254 | orchestrator |
2026-01-10 14:34:02.537261 | orchestrator | PLAY RECAP *********************************************************************
2026-01-10 14:34:02.537269 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:34:02.537278 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:34:02.537285 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:34:02.537291 | orchestrator |
2026-01-10 14:34:02.537298 | orchestrator |
2026-01-10 14:34:02.537305 | orchestrator | TASKS RECAP ********************************************************************
2026-01-10 14:34:02.537311 | orchestrator | Saturday 10 January 2026 14:33:52 +0000 (0:00:07.217) 0:00:16.858 ******
2026-01-10 14:34:02.537333 | orchestrator | ===============================================================================
2026-01-10 14:34:02.537340 | orchestrator | memcached : Restart memcached container --------------------------------- 7.22s
2026-01-10 14:34:02.537347 | orchestrator | memcached : Copying over config.json files for services ----------------- 2.96s
2026-01-10 14:34:02.537353 | orchestrator | memcached : Check memcached container ----------------------------------- 2.23s
2026-01-10 14:34:02.537360 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.11s
2026-01-10 14:34:02.537367 | orchestrator | memcached : Ensuring config directories exist --------------------------- 1.08s
2026-01-10 14:34:02.537373 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.93s
2026-01-10 14:34:02.537381 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.60s
2026-01-10 14:34:02.537398 | orchestrator |
2026-01-10 14:34:02.537404 | orchestrator |
2026-01-10 14:34:02.537411 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-10 14:34:02.537420 | orchestrator |
2026-01-10 14:34:02.537430 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-10 14:34:02.537439 | orchestrator | Saturday 10 January 2026 14:33:36 +0000 (0:00:00.563) 0:00:00.563 ******
2026-01-10 14:34:02.537449 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:34:02.537459 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:34:02.537468 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:34:02.537480 | orchestrator |
2026-01-10 14:34:02.537489 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-10 14:34:02.537519 | orchestrator | Saturday 10 January 2026 14:33:37 +0000 (0:00:00.728) 0:00:01.292 ******
2026-01-10 14:34:02.537530 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True)
2026-01-10 14:34:02.537539 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True)
2026-01-10 14:34:02.537549 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True)
2026-01-10 14:34:02.537558 | orchestrator |
2026-01-10 14:34:02.537568 | orchestrator | PLAY [Apply role redis] ********************************************************
2026-01-10 14:34:02.537577 | orchestrator |
2026-01-10 14:34:02.537587 | orchestrator | TASK [redis : include_tasks] ***************************************************
2026-01-10 14:34:02.537596 | orchestrator | Saturday 10 January 2026 14:33:38 +0000 (0:00:00.739) 0:00:02.031 ******
2026-01-10 14:34:02.537629 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:34:02.537640 | orchestrator |
2026-01-10 14:34:02.537650 | orchestrator | TASK [redis : Ensuring config directories exist] *******************************
2026-01-10 14:34:02.537659 | orchestrator | Saturday 10 January 2026 14:33:39 +0000 (0:00:00.976) 0:00:03.007 ******
2026-01-10 14:34:02.537672 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-10 14:34:02.537689 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-10 14:34:02.537699 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-10 14:34:02.537717 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-10 14:34:02.537737 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-10 14:34:02.537759 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-10 14:34:02.537770 | orchestrator |
2026-01-10 14:34:02.537780 | orchestrator | TASK [redis : Copying over default config.json files] **************************
2026-01-10 14:34:02.537790 | orchestrator | Saturday 10 January 2026 14:33:40 +0000 (0:00:01.469) 0:00:04.476 ******
2026-01-10 14:34:02.537801 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-10 14:34:02.537808 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-10 14:34:02.537814 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-10 14:34:02.537820 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-10 14:34:02.537837 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-10 14:34:02.537908 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-10 14:34:02.537919 | orchestrator |
2026-01-10 14:34:02.537929 | orchestrator | TASK [redis : Copying over redis config files] *********************************
2026-01-10 14:34:02.537940 | orchestrator | Saturday 10 January 2026 14:33:44 +0000 (0:00:03.567) 0:00:08.044 ******
2026-01-10 14:34:02.537949 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-10 14:34:02.537959 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-10 14:34:02.537969 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-10 14:34:02.537978 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-10 14:34:02.538000 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-10 14:34:02.538064 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-10 14:34:02.538079 | orchestrator |
2026-01-10 14:34:02.538090
| orchestrator | TASK [redis : Check redis containers] ****************************************** 2026-01-10 14:34:02.538100 | orchestrator | Saturday 10 January 2026 14:33:47 +0000 (0:00:02.808) 0:00:10.852 ****** 2026-01-10 14:34:02.538110 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-10 14:34:02.538119 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-10 14:34:02.538126 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 
'timeout': '30'}}}) 2026-01-10 14:34:02.538138 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-10 14:34:02.538144 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-10 14:34:02.538156 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-10 14:34:02.538162 | orchestrator |
2026-01-10 14:34:02.538168 | orchestrator | TASK [redis : Flush handlers] **************************************************
2026-01-10 14:34:02.538174 | orchestrator | Saturday 10 January 2026 14:33:48 +0000 (0:00:01.790) 0:00:12.643 ******
2026-01-10 14:34:02.538180 | orchestrator |
2026-01-10 14:34:02.538186 | orchestrator | TASK [redis : Flush handlers] **************************************************
2026-01-10 14:34:02.538192 | orchestrator | Saturday 10 January 2026 14:33:49 +0000 (0:00:00.089) 0:00:12.732 ******
2026-01-10 14:34:02.538198 | orchestrator |
2026-01-10 14:34:02.538204 | orchestrator | TASK [redis : Flush handlers] **************************************************
2026-01-10 14:34:02.538210 | orchestrator | Saturday 10 January 2026 14:33:49 +0000 (0:00:00.065) 0:00:12.797 ******
2026-01-10 14:34:02.538215 | orchestrator |
2026-01-10 14:34:02.538221 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ******************************
2026-01-10 14:34:02.538227 | orchestrator | Saturday 10 January 2026 14:33:49 +0000 (0:00:00.062) 0:00:12.860 ******
2026-01-10 14:34:02.538233 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:34:02.538239 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:34:02.538245 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:34:02.538250 | orchestrator |
2026-01-10 14:34:02.538263 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] *********************
2026-01-10 14:34:02.538269 | orchestrator | Saturday 10 January 2026 14:33:56 +0000 (0:00:07.140) 0:00:20.001 ******
2026-01-10 14:34:02.538275 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:34:02.538281 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:34:02.538287 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:34:02.538293 | orchestrator |
2026-01-10 14:34:02.538299 | orchestrator | PLAY RECAP *********************************************************************
2026-01-10 14:34:02.538305 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:34:02.538318 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:34:02.538324 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:34:02.538330 | orchestrator |
2026-01-10 14:34:02.538335 | orchestrator |
2026-01-10 14:34:02.538341 | orchestrator | TASKS RECAP ********************************************************************
2026-01-10 14:34:02.538347 | orchestrator | Saturday 10 January 2026 14:34:01 +0000 (0:00:05.196) 0:00:25.197 ******
2026-01-10 14:34:02.538353 | orchestrator | ===============================================================================
2026-01-10 14:34:02.538359 | orchestrator | redis : Restart redis container ----------------------------------------- 7.14s
2026-01-10 14:34:02.538365 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 5.20s
2026-01-10 14:34:02.538371 | orchestrator | redis : Copying over default config.json files -------------------------- 3.57s
2026-01-10 14:34:02.538377 | orchestrator | redis : Copying over redis config files --------------------------------- 2.81s
2026-01-10 14:34:02.538383 | orchestrator | redis : Check redis containers ------------------------------------------ 1.79s
2026-01-10 14:34:02.538388 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.47s
2026-01-10 14:34:02.538394 | orchestrator | redis : include_tasks --------------------------------------------------- 0.98s
2026-01-10 14:34:02.538400 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.74s
2026-01-10 14:34:02.538406 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.73s
2026-01-10 14:34:02.538412 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.22s
2026-01-10 14:34:02.538418 | orchestrator | 2026-01-10 14:34:02 | INFO  | Task 77b8ae82-f148-4bc9-b927-bd7e170387da is in state STARTED
2026-01-10 14:34:02.538424 | orchestrator | 2026-01-10 14:34:02 | INFO  | Task 17d93e9b-366a-450f-8ecc-93dc96c1cffb is in state STARTED
2026-01-10 14:34:02.538434 | orchestrator | 2026-01-10 14:34:02 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:34:05.566424 | orchestrator | 2026-01-10 14:34:05 | INFO  | Task e49c1d62-0376-427a-8e08-be74ad4fd0d4 is in state STARTED
2026-01-10 14:34:05.566936 | orchestrator | 2026-01-10 14:34:05 | INFO  | Task 99d53ac9-324f-4868-bf5c-6c4473efd3fb is in state STARTED
2026-01-10 14:34:05.567409 | orchestrator | 2026-01-10 14:34:05 | INFO  | Task 955b2409-5d44-41fc-9935-7bfdb67baec4 is in state STARTED
2026-01-10 14:34:05.569479 | orchestrator | 2026-01-10 14:34:05 | INFO  | Task 77b8ae82-f148-4bc9-b927-bd7e170387da is in state STARTED
2026-01-10 14:34:05.570332 | orchestrator | 2026-01-10 14:34:05 | INFO  | Task 17d93e9b-366a-450f-8ecc-93dc96c1cffb is in state STARTED
2026-01-10 14:34:05.570387 | orchestrator | 2026-01-10 14:34:05 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:34:08.591765 | orchestrator | 2026-01-10 14:34:08 | INFO  | Task e49c1d62-0376-427a-8e08-be74ad4fd0d4 is in state STARTED
2026-01-10 14:34:08.592056 | orchestrator | 2026-01-10 14:34:08 | INFO  | Task 99d53ac9-324f-4868-bf5c-6c4473efd3fb is in state STARTED
2026-01-10 14:34:08.592801 | orchestrator | 2026-01-10 14:34:08 | INFO  | Task 955b2409-5d44-41fc-9935-7bfdb67baec4 is in
state STARTED 2026-01-10 14:34:08.593695 | orchestrator | 2026-01-10 14:34:08 | INFO  | Task 77b8ae82-f148-4bc9-b927-bd7e170387da is in state STARTED 2026-01-10 14:34:08.594558 | orchestrator | 2026-01-10 14:34:08 | INFO  | Task 17d93e9b-366a-450f-8ecc-93dc96c1cffb is in state STARTED 2026-01-10 14:34:08.594593 | orchestrator | 2026-01-10 14:34:08 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:34:11.622305 | orchestrator | 2026-01-10 14:34:11 | INFO  | Task e49c1d62-0376-427a-8e08-be74ad4fd0d4 is in state STARTED 2026-01-10 14:34:11.622394 | orchestrator | 2026-01-10 14:34:11 | INFO  | Task 99d53ac9-324f-4868-bf5c-6c4473efd3fb is in state STARTED 2026-01-10 14:34:11.622409 | orchestrator | 2026-01-10 14:34:11 | INFO  | Task 955b2409-5d44-41fc-9935-7bfdb67baec4 is in state STARTED 2026-01-10 14:34:11.624538 | orchestrator | 2026-01-10 14:34:11 | INFO  | Task 77b8ae82-f148-4bc9-b927-bd7e170387da is in state STARTED 2026-01-10 14:34:11.625130 | orchestrator | 2026-01-10 14:34:11 | INFO  | Task 17d93e9b-366a-450f-8ecc-93dc96c1cffb is in state STARTED 2026-01-10 14:34:11.625169 | orchestrator | 2026-01-10 14:34:11 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:34:14.708984 | orchestrator | 2026-01-10 14:34:14 | INFO  | Task e49c1d62-0376-427a-8e08-be74ad4fd0d4 is in state STARTED 2026-01-10 14:34:14.710370 | orchestrator | 2026-01-10 14:34:14 | INFO  | Task 99d53ac9-324f-4868-bf5c-6c4473efd3fb is in state STARTED 2026-01-10 14:34:14.710835 | orchestrator | 2026-01-10 14:34:14 | INFO  | Task 955b2409-5d44-41fc-9935-7bfdb67baec4 is in state STARTED 2026-01-10 14:34:14.711687 | orchestrator | 2026-01-10 14:34:14 | INFO  | Task 77b8ae82-f148-4bc9-b927-bd7e170387da is in state STARTED 2026-01-10 14:34:14.712231 | orchestrator | 2026-01-10 14:34:14 | INFO  | Task 17d93e9b-366a-450f-8ecc-93dc96c1cffb is in state STARTED 2026-01-10 14:34:14.712279 | orchestrator | 2026-01-10 14:34:14 | INFO  | Wait 1 second(s) until the next check 2026-01-10 
14:34:17.749467 | orchestrator | 2026-01-10 14:34:17 | INFO  | Task e49c1d62-0376-427a-8e08-be74ad4fd0d4 is in state STARTED 2026-01-10 14:34:17.749543 | orchestrator | 2026-01-10 14:34:17 | INFO  | Task 99d53ac9-324f-4868-bf5c-6c4473efd3fb is in state STARTED 2026-01-10 14:34:17.749565 | orchestrator | 2026-01-10 14:34:17 | INFO  | Task 955b2409-5d44-41fc-9935-7bfdb67baec4 is in state STARTED 2026-01-10 14:34:17.749583 | orchestrator | 2026-01-10 14:34:17 | INFO  | Task 77b8ae82-f148-4bc9-b927-bd7e170387da is in state STARTED 2026-01-10 14:34:17.750415 | orchestrator | 2026-01-10 14:34:17 | INFO  | Task 17d93e9b-366a-450f-8ecc-93dc96c1cffb is in state STARTED 2026-01-10 14:34:17.750444 | orchestrator | 2026-01-10 14:34:17 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:34:20.795480 | orchestrator | 2026-01-10 14:34:20 | INFO  | Task e49c1d62-0376-427a-8e08-be74ad4fd0d4 is in state STARTED 2026-01-10 14:34:20.795589 | orchestrator | 2026-01-10 14:34:20 | INFO  | Task 99d53ac9-324f-4868-bf5c-6c4473efd3fb is in state STARTED 2026-01-10 14:34:20.798447 | orchestrator | 2026-01-10 14:34:20 | INFO  | Task 955b2409-5d44-41fc-9935-7bfdb67baec4 is in state STARTED 2026-01-10 14:34:20.799362 | orchestrator | 2026-01-10 14:34:20 | INFO  | Task 77b8ae82-f148-4bc9-b927-bd7e170387da is in state STARTED 2026-01-10 14:34:20.800003 | orchestrator | 2026-01-10 14:34:20 | INFO  | Task 17d93e9b-366a-450f-8ecc-93dc96c1cffb is in state STARTED 2026-01-10 14:34:20.800029 | orchestrator | 2026-01-10 14:34:20 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:34:23.844418 | orchestrator | 2026-01-10 14:34:23 | INFO  | Task e49c1d62-0376-427a-8e08-be74ad4fd0d4 is in state STARTED 2026-01-10 14:34:23.844628 | orchestrator | 2026-01-10 14:34:23 | INFO  | Task 99d53ac9-324f-4868-bf5c-6c4473efd3fb is in state STARTED 2026-01-10 14:34:23.845495 | orchestrator | 2026-01-10 14:34:23 | INFO  | Task 955b2409-5d44-41fc-9935-7bfdb67baec4 is in state STARTED 2026-01-10 
14:34:23.846244 | orchestrator | 2026-01-10 14:34:23 | INFO  | Task 77b8ae82-f148-4bc9-b927-bd7e170387da is in state STARTED 2026-01-10 14:34:23.847378 | orchestrator | 2026-01-10 14:34:23 | INFO  | Task 17d93e9b-366a-450f-8ecc-93dc96c1cffb is in state STARTED 2026-01-10 14:34:23.847397 | orchestrator | 2026-01-10 14:34:23 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:34:26.893206 | orchestrator | 2026-01-10 14:34:26 | INFO  | Task e49c1d62-0376-427a-8e08-be74ad4fd0d4 is in state STARTED 2026-01-10 14:34:26.895043 | orchestrator | 2026-01-10 14:34:26 | INFO  | Task 99d53ac9-324f-4868-bf5c-6c4473efd3fb is in state STARTED 2026-01-10 14:34:26.895617 | orchestrator | 2026-01-10 14:34:26 | INFO  | Task 955b2409-5d44-41fc-9935-7bfdb67baec4 is in state STARTED 2026-01-10 14:34:26.903563 | orchestrator | 2026-01-10 14:34:26 | INFO  | Task 77b8ae82-f148-4bc9-b927-bd7e170387da is in state STARTED 2026-01-10 14:34:26.907106 | orchestrator | 2026-01-10 14:34:26 | INFO  | Task 17d93e9b-366a-450f-8ecc-93dc96c1cffb is in state STARTED 2026-01-10 14:34:26.907151 | orchestrator | 2026-01-10 14:34:26 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:34:29.982817 | orchestrator | 2026-01-10 14:34:29 | INFO  | Task e49c1d62-0376-427a-8e08-be74ad4fd0d4 is in state STARTED 2026-01-10 14:34:29.983376 | orchestrator | 2026-01-10 14:34:29 | INFO  | Task 99d53ac9-324f-4868-bf5c-6c4473efd3fb is in state STARTED 2026-01-10 14:34:29.985812 | orchestrator | 2026-01-10 14:34:29 | INFO  | Task 955b2409-5d44-41fc-9935-7bfdb67baec4 is in state STARTED 2026-01-10 14:34:29.986811 | orchestrator | 2026-01-10 14:34:29 | INFO  | Task 77b8ae82-f148-4bc9-b927-bd7e170387da is in state STARTED 2026-01-10 14:34:29.987426 | orchestrator | 2026-01-10 14:34:29 | INFO  | Task 17d93e9b-366a-450f-8ecc-93dc96c1cffb is in state STARTED 2026-01-10 14:34:29.987447 | orchestrator | 2026-01-10 14:34:29 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:34:33.032109 | orchestrator 
| 2026-01-10 14:34:33 | INFO  | Task e49c1d62-0376-427a-8e08-be74ad4fd0d4 is in state STARTED 2026-01-10 14:34:33.033898 | orchestrator | 2026-01-10 14:34:33 | INFO  | Task 99d53ac9-324f-4868-bf5c-6c4473efd3fb is in state STARTED 2026-01-10 14:34:33.034458 | orchestrator | 2026-01-10 14:34:33 | INFO  | Task 955b2409-5d44-41fc-9935-7bfdb67baec4 is in state STARTED 2026-01-10 14:34:33.035346 | orchestrator | 2026-01-10 14:34:33 | INFO  | Task 77b8ae82-f148-4bc9-b927-bd7e170387da is in state STARTED 2026-01-10 14:34:33.036055 | orchestrator | 2026-01-10 14:34:33 | INFO  | Task 17d93e9b-366a-450f-8ecc-93dc96c1cffb is in state STARTED 2026-01-10 14:34:33.036111 | orchestrator | 2026-01-10 14:34:33 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:34:36.077635 | orchestrator | 2026-01-10 14:34:36 | INFO  | Task e49c1d62-0376-427a-8e08-be74ad4fd0d4 is in state STARTED 2026-01-10 14:34:36.080287 | orchestrator | 2026-01-10 14:34:36 | INFO  | Task 99d53ac9-324f-4868-bf5c-6c4473efd3fb is in state STARTED 2026-01-10 14:34:36.083078 | orchestrator | 2026-01-10 14:34:36 | INFO  | Task 955b2409-5d44-41fc-9935-7bfdb67baec4 is in state STARTED 2026-01-10 14:34:36.085199 | orchestrator | 2026-01-10 14:34:36 | INFO  | Task 77b8ae82-f148-4bc9-b927-bd7e170387da is in state STARTED 2026-01-10 14:34:36.087402 | orchestrator | 2026-01-10 14:34:36 | INFO  | Task 17d93e9b-366a-450f-8ecc-93dc96c1cffb is in state STARTED 2026-01-10 14:34:36.087777 | orchestrator | 2026-01-10 14:34:36 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:34:39.137571 | orchestrator | 2026-01-10 14:34:39 | INFO  | Task e49c1d62-0376-427a-8e08-be74ad4fd0d4 is in state STARTED 2026-01-10 14:34:39.137872 | orchestrator | 2026-01-10 14:34:39 | INFO  | Task 99d53ac9-324f-4868-bf5c-6c4473efd3fb is in state STARTED 2026-01-10 14:34:39.141521 | orchestrator | 2026-01-10 14:34:39 | INFO  | Task 955b2409-5d44-41fc-9935-7bfdb67baec4 is in state STARTED 2026-01-10 14:34:39.142141 | orchestrator | 
2026-01-10 14:34:39 | INFO  | Task 77b8ae82-f148-4bc9-b927-bd7e170387da is in state STARTED
2026-01-10 14:34:39.142830 | orchestrator | 2026-01-10 14:34:39 | INFO  | Task 17d93e9b-366a-450f-8ecc-93dc96c1cffb is in state STARTED
2026-01-10 14:34:39.142856 | orchestrator | 2026-01-10 14:34:39 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:34:42.173261 | orchestrator | 2026-01-10 14:34:42 | INFO  | Task e49c1d62-0376-427a-8e08-be74ad4fd0d4 is in state STARTED
2026-01-10 14:34:42.175412 | orchestrator | 2026-01-10 14:34:42 | INFO  | Task 99d53ac9-324f-4868-bf5c-6c4473efd3fb is in state SUCCESS
2026-01-10 14:34:42.176275 | orchestrator |
2026-01-10 14:34:42.176321 | orchestrator |
2026-01-10 14:34:42.176328 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-10 14:34:42.176333 | orchestrator |
2026-01-10 14:34:42.176337 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-10 14:34:42.176342 | orchestrator | Saturday 10 January 2026 14:33:36 +0000 (0:00:00.304) 0:00:00.304 ******
2026-01-10 14:34:42.176346 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:34:42.176351 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:34:42.176355 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:34:42.176359 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:34:42.176363 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:34:42.176366 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:34:42.176370 | orchestrator |
2026-01-10 14:34:42.176457 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-10 14:34:42.176464 | orchestrator | Saturday 10 January 2026 14:33:38 +0000 (0:00:01.517) 0:00:01.821 ******
2026-01-10 14:34:42.176469 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-01-10 14:34:42.176473 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-01-10 14:34:42.176477 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-01-10 14:34:42.176481 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-01-10 14:34:42.176485 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-01-10 14:34:42.176488 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-01-10 14:34:42.176492 | orchestrator |
2026-01-10 14:34:42.176496 | orchestrator | PLAY [Apply role openvswitch] **************************************************
2026-01-10 14:34:42.176499 | orchestrator |
2026-01-10 14:34:42.176503 | orchestrator | TASK [openvswitch : include_tasks] *********************************************
2026-01-10 14:34:42.176507 | orchestrator | Saturday 10 January 2026 14:33:39 +0000 (0:00:00.986) 0:00:02.807 ******
2026-01-10 14:34:42.176511 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-10 14:34:42.176517 | orchestrator |
2026-01-10 14:34:42.176521 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-01-10 14:34:42.176524 | orchestrator | Saturday 10 January 2026 14:33:41 +0000 (0:00:01.900) 0:00:04.708 ******
2026-01-10 14:34:42.176528 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2026-01-10 14:34:42.176532 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2026-01-10 14:34:42.176536 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2026-01-10 14:34:42.176540 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2026-01-10 14:34:42.176543 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2026-01-10 14:34:42.176547 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2026-01-10 14:34:42.176551 | orchestrator |
2026-01-10 14:34:42.176554 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-01-10 14:34:42.176574 | orchestrator | Saturday 10 January 2026 14:33:43 +0000 (0:00:02.008) 0:00:06.717 ******
2026-01-10 14:34:42.176578 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2026-01-10 14:34:42.176582 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2026-01-10 14:34:42.176586 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2026-01-10 14:34:42.176590 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2026-01-10 14:34:42.176594 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2026-01-10 14:34:42.176597 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2026-01-10 14:34:42.176601 | orchestrator |
2026-01-10 14:34:42.176605 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-01-10 14:34:42.176609 | orchestrator | Saturday 10 January 2026 14:33:45 +0000 (0:00:01.848) 0:00:08.565 ******
2026-01-10 14:34:42.176612 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)
2026-01-10 14:34:42.176616 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:34:42.176620 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)
2026-01-10 14:34:42.176624 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:34:42.176628 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)
2026-01-10 14:34:42.176631 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:34:42.176635 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)
2026-01-10 14:34:42.176639 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:34:42.176643 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)
2026-01-10 14:34:42.176647 | orchestrator | skipping: [testbed-node-4]
2026-01-10
14:34:42.176650 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2026-01-10 14:34:42.176654 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:34:42.176658 | orchestrator | 2026-01-10 14:34:42.176662 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2026-01-10 14:34:42.176665 | orchestrator | Saturday 10 January 2026 14:33:46 +0000 (0:00:01.507) 0:00:10.073 ****** 2026-01-10 14:34:42.176669 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:34:42.176673 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:34:42.176676 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:34:42.176680 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:34:42.176700 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:34:42.176710 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:34:42.176713 | orchestrator | 2026-01-10 14:34:42.176717 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2026-01-10 14:34:42.176721 | orchestrator | Saturday 10 January 2026 14:33:47 +0000 (0:00:00.874) 0:00:10.948 ****** 2026-01-10 14:34:42.176737 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-10 14:34:42.176743 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-10 14:34:42.176752 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-10 14:34:42.176756 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 
'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-10 14:34:42.176760 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-10 14:34:42.176772 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-10 14:34:42.176776 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 
'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-10 14:34:42.176780 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-10 14:34:42.176787 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-10 14:34:42.176791 | orchestrator 
| changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-10 14:34:42.176795 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-10 14:34:42.176805 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-10 14:34:42.176809 | orchestrator | 2026-01-10 14:34:42.176813 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2026-01-10 14:34:42.176817 | orchestrator | Saturday 10 January 2026 14:33:49 +0000 (0:00:01.847) 0:00:12.795 ****** 2026-01-10 14:34:42.176821 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-10 14:34:42.176829 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 
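The service dicts echoed in the items above come from the kolla-ansible openvswitch role's service definitions. As a rough sketch (values copied from the log records; the surrounding YAML structure is assumed from the kolla-ansible service-definition convention, not quoted from the role), the two services and their healthchecks look like:

```yaml
# Sketch only — reconstructed from the log items above, not the role source.
openvswitch-db-server:
  container_name: openvswitch_db
  image: registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130
  healthcheck:
    interval: "30"
    retries: "3"
    start_period: "5"
    test: ["CMD-SHELL", "ovsdb-client list-dbs"]   # DB is healthy if it can list databases
    timeout: "30"
openvswitch-vswitchd:
  container_name: openvswitch_vswitchd
  privileged: true
  healthcheck:
    test: ["CMD-SHELL", "ovs-appctl version"]      # daemon is healthy if it answers appctl
```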
2026-01-10 14:34:42.176837 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-10 14:34:42.176886 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-10 14:34:42.176899 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-10 14:34:42.176910 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-10 14:34:42.176922 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-10 14:34:42.176928 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 
'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-10 14:34:42.176934 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-10 14:34:42.176941 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-10 14:34:42.176954 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-10 14:34:42.176962 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-10 14:34:42.176974 | orchestrator | 2026-01-10 14:34:42.176981 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2026-01-10 14:34:42.176988 | orchestrator | Saturday 10 January 2026 14:33:52 +0000 (0:00:03.197) 0:00:15.992 ****** 2026-01-10 14:34:42.176992 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:34:42.176996 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:34:42.177000 | 
orchestrator | skipping: [testbed-node-1] 2026-01-10 14:34:42.177003 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:34:42.177007 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:34:42.177011 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:34:42.177014 | orchestrator | 2026-01-10 14:34:42.177018 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2026-01-10 14:34:42.177022 | orchestrator | Saturday 10 January 2026 14:33:53 +0000 (0:00:01.179) 0:00:17.172 ****** 2026-01-10 14:34:42.177026 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-10 14:34:42.177030 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-10 14:34:42.177034 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-10 14:34:42.177044 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-10 14:34:42.177052 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-10 14:34:42.177057 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-10 14:34:42.177060 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-10 14:34:42.177065 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 
'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-10 14:34:42.177071 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-10 14:34:42.177080 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-10 14:34:42.177094 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-10 14:34:42.177100 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-10 14:34:42.177106 | orchestrator | 2026-01-10 14:34:42.177112 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-01-10 14:34:42.177122 | orchestrator | Saturday 10 January 2026 14:33:57 +0000 (0:00:03.576) 0:00:20.749 ****** 2026-01-10 14:34:42.177130 | orchestrator | 2026-01-10 14:34:42.177239 | orchestrator | TASK [openvswitch : Flush Handlers] 
******************************************** 2026-01-10 14:34:42.177252 | orchestrator | Saturday 10 January 2026 14:33:57 +0000 (0:00:00.531) 0:00:21.280 ****** 2026-01-10 14:34:42.177258 | orchestrator | 2026-01-10 14:34:42.177265 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-01-10 14:34:42.177271 | orchestrator | Saturday 10 January 2026 14:33:58 +0000 (0:00:00.378) 0:00:21.659 ****** 2026-01-10 14:34:42.177278 | orchestrator | 2026-01-10 14:34:42.177285 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-01-10 14:34:42.177291 | orchestrator | Saturday 10 January 2026 14:33:58 +0000 (0:00:00.128) 0:00:21.787 ****** 2026-01-10 14:34:42.177299 | orchestrator | 2026-01-10 14:34:42.177304 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-01-10 14:34:42.177308 | orchestrator | Saturday 10 January 2026 14:33:58 +0000 (0:00:00.152) 0:00:21.940 ****** 2026-01-10 14:34:42.177313 | orchestrator | 2026-01-10 14:34:42.177317 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-01-10 14:34:42.177321 | orchestrator | Saturday 10 January 2026 14:33:58 +0000 (0:00:00.134) 0:00:22.074 ****** 2026-01-10 14:34:42.177326 | orchestrator | 2026-01-10 14:34:42.177330 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2026-01-10 14:34:42.177334 | orchestrator | Saturday 10 January 2026 14:33:58 +0000 (0:00:00.140) 0:00:22.215 ****** 2026-01-10 14:34:42.177338 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:34:42.177343 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:34:42.177347 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:34:42.177351 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:34:42.177361 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:34:42.177366 | orchestrator | changed: 
[testbed-node-5] 2026-01-10 14:34:42.177370 | orchestrator | 2026-01-10 14:34:42.177375 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2026-01-10 14:34:42.177382 | orchestrator | Saturday 10 January 2026 14:34:05 +0000 (0:00:06.377) 0:00:28.593 ****** 2026-01-10 14:34:42.177388 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:34:42.177395 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:34:42.177401 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:34:42.177406 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:34:42.177413 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:34:42.177419 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:34:42.177424 | orchestrator | 2026-01-10 14:34:42.177430 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-01-10 14:34:42.177436 | orchestrator | Saturday 10 January 2026 14:34:06 +0000 (0:00:01.448) 0:00:30.041 ****** 2026-01-10 14:34:42.177444 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:34:42.177450 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:34:42.177456 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:34:42.177461 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:34:42.177467 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:34:42.177479 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:34:42.177486 | orchestrator | 2026-01-10 14:34:42.177493 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2026-01-10 14:34:42.177500 | orchestrator | Saturday 10 January 2026 14:34:16 +0000 (0:00:10.202) 0:00:40.244 ****** 2026-01-10 14:34:42.177513 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2026-01-10 14:34:42.177521 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 
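The "Set system-id, hostname and hw-offload" task below loops over column/name/value items against the Open_vSwitch table. As an illustration (not taken verbatim from the role), the per-item effect corresponds roughly to these ovs-vsctl invocations inside the vswitchd container:

```console
# Approximate equivalents of the task items (illustrative only):
$ ovs-vsctl set Open_vSwitch . external_ids:system-id=testbed-node-1
$ ovs-vsctl set Open_vSwitch . external_ids:hostname=testbed-node-1
$ ovs-vsctl remove Open_vSwitch . other_config hw-offload   # item has state: absent
```

The `state: 'absent'` items report `ok` rather than `changed` because `hw-offload` was never set, so removing it is a no-op.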
2026-01-10 14:34:42.177529 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2026-01-10 14:34:42.177539 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2026-01-10 14:34:42.177545 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2026-01-10 14:34:42.177552 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2026-01-10 14:34:42.177558 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2026-01-10 14:34:42.177565 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2026-01-10 14:34:42.177571 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2026-01-10 14:34:42.177577 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2026-01-10 14:34:42.177583 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2026-01-10 14:34:42.177589 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2026-01-10 14:34:42.177596 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-01-10 14:34:42.177602 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-01-10 14:34:42.177608 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-01-10 
14:34:42.177613 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-01-10 14:34:42.177619 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-01-10 14:34:42.177633 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-01-10 14:34:42.177640 | orchestrator | 2026-01-10 14:34:42.177647 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2026-01-10 14:34:42.177653 | orchestrator | Saturday 10 January 2026 14:34:25 +0000 (0:00:08.757) 0:00:49.001 ****** 2026-01-10 14:34:42.177660 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2026-01-10 14:34:42.177666 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:34:42.177672 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2026-01-10 14:34:42.177679 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:34:42.177685 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2026-01-10 14:34:42.177691 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:34:42.177697 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2026-01-10 14:34:42.177703 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2026-01-10 14:34:42.177709 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2026-01-10 14:34:42.177719 | orchestrator | 2026-01-10 14:34:42.177728 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2026-01-10 14:34:42.177734 | orchestrator | Saturday 10 January 2026 14:34:28 +0000 (0:00:02.599) 0:00:51.601 ****** 2026-01-10 14:34:42.177740 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2026-01-10 14:34:42.177746 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:34:42.177753 | orchestrator | skipping: 
[testbed-node-4] => (item=['br-ex', 'vxlan0'])  2026-01-10 14:34:42.177759 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:34:42.177766 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2026-01-10 14:34:42.177772 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:34:42.177779 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2026-01-10 14:34:42.177784 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2026-01-10 14:34:42.177788 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2026-01-10 14:34:42.177792 | orchestrator | 2026-01-10 14:34:42.177795 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-01-10 14:34:42.177799 | orchestrator | Saturday 10 January 2026 14:34:31 +0000 (0:00:03.700) 0:00:55.301 ****** 2026-01-10 14:34:42.177803 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:34:42.177807 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:34:42.177816 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:34:42.177820 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:34:42.177824 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:34:42.177827 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:34:42.177831 | orchestrator | 2026-01-10 14:34:42.177835 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-10 14:34:42.177842 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-01-10 14:34:42.177925 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-01-10 14:34:42.177933 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-01-10 14:34:42.177940 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 
ignored=0 2026-01-10 14:34:42.177946 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-01-10 14:34:42.177952 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-01-10 14:34:42.177967 | orchestrator | 2026-01-10 14:34:42.177973 | orchestrator | 2026-01-10 14:34:42.177980 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-10 14:34:42.177984 | orchestrator | Saturday 10 January 2026 14:34:40 +0000 (0:00:08.929) 0:01:04.231 ****** 2026-01-10 14:34:42.177988 | orchestrator | =============================================================================== 2026-01-10 14:34:42.177992 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 19.13s 2026-01-10 14:34:42.177996 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 8.76s 2026-01-10 14:34:42.178000 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------- 6.38s 2026-01-10 14:34:42.178003 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.70s 2026-01-10 14:34:42.178007 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 3.58s 2026-01-10 14:34:42.178011 | orchestrator | openvswitch : Copying over config.json files for services --------------- 3.20s 2026-01-10 14:34:42.178044 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.60s 2026-01-10 14:34:42.178050 | orchestrator | module-load : Load modules ---------------------------------------------- 2.01s 2026-01-10 14:34:42.178053 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.90s 2026-01-10 14:34:42.178057 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 1.85s 2026-01-10 14:34:42.178061 | 
orchestrator | openvswitch : Ensuring config directories exist ------------------------- 1.85s 2026-01-10 14:34:42.178065 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.52s 2026-01-10 14:34:42.178068 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.51s 2026-01-10 14:34:42.178072 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.47s 2026-01-10 14:34:42.178076 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.45s 2026-01-10 14:34:42.178082 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.18s 2026-01-10 14:34:42.178089 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.99s 2026-01-10 14:34:42.178099 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.88s 2026-01-10 14:34:42.178107 | orchestrator | 2026-01-10 14:34:42 | INFO  | Task 955b2409-5d44-41fc-9935-7bfdb67baec4 is in state STARTED 2026-01-10 14:34:42.178114 | orchestrator | 2026-01-10 14:34:42 | INFO  | Task 77b8ae82-f148-4bc9-b927-bd7e170387da is in state STARTED 2026-01-10 14:34:42.178795 | orchestrator | 2026-01-10 14:34:42 | INFO  | Task 17d93e9b-366a-450f-8ecc-93dc96c1cffb is in state STARTED 2026-01-10 14:34:42.178830 | orchestrator | 2026-01-10 14:34:42 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:34:45.229067 | orchestrator | 2026-01-10 14:34:45 | INFO  | Task e49c1d62-0376-427a-8e08-be74ad4fd0d4 is in state STARTED 2026-01-10 14:34:45.232537 | orchestrator | 2026-01-10 14:34:45 | INFO  | Task dfb308a8-9aed-4c35-936d-6fb4f84c96d6 is in state STARTED 2026-01-10 14:34:45.232606 | orchestrator | 2026-01-10 14:34:45 | INFO  | Task 955b2409-5d44-41fc-9935-7bfdb67baec4 is in state STARTED 2026-01-10 14:34:45.234842 | orchestrator | 2026-01-10 14:34:45 | INFO  | Task 
e49c1d62-0376-427a-8e08-be74ad4fd0d4 is in state STARTED 2026-01-10 14:35:25.274255 | orchestrator | 2026-01-10 14:35:25 | INFO  | Task dfb308a8-9aed-4c35-936d-6fb4f84c96d6 is in state STARTED 2026-01-10 14:35:25.274325 | orchestrator | 2026-01-10 14:35:25 | INFO  | Task 955b2409-5d44-41fc-9935-7bfdb67baec4 is in state STARTED 2026-01-10 14:35:25.275335 | orchestrator | 2026-01-10 14:35:25 | INFO  | Task 77b8ae82-f148-4bc9-b927-bd7e170387da is in state STARTED 2026-01-10 14:35:25.276452 | orchestrator | 2026-01-10 14:35:25 | INFO  | Task 17d93e9b-366a-450f-8ecc-93dc96c1cffb is in state STARTED 2026-01-10 14:35:25.276501 | orchestrator | 2026-01-10 14:35:25 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:35:28.310427 | orchestrator | 2026-01-10 14:35:28 | INFO  | Task e49c1d62-0376-427a-8e08-be74ad4fd0d4 is in state STARTED 2026-01-10 14:35:28.310579 | orchestrator | 2026-01-10 14:35:28 | INFO  | Task dfb308a8-9aed-4c35-936d-6fb4f84c96d6 is in state STARTED 2026-01-10 14:35:28.316423 | orchestrator | 2026-01-10 14:35:28 | INFO  | Task 955b2409-5d44-41fc-9935-7bfdb67baec4 is in state STARTED 2026-01-10 14:35:28.320745 | orchestrator | 2026-01-10 14:35:28.320805 | orchestrator | 2026-01-10 14:35:28 | INFO  | Task 77b8ae82-f148-4bc9-b927-bd7e170387da is in state SUCCESS 2026-01-10 14:35:28.323925 | orchestrator | 2026-01-10 14:35:28.323973 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2026-01-10 14:35:28.323980 | orchestrator | 2026-01-10 14:35:28.323986 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2026-01-10 14:35:28.323992 | orchestrator | Saturday 10 January 2026 14:30:59 +0000 (0:00:00.205) 0:00:00.205 ****** 2026-01-10 14:35:28.323998 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:35:28.324004 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:35:28.324009 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:35:28.324015 | orchestrator | 
ok: [testbed-node-0] 2026-01-10 14:35:28.324036 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:35:28.324042 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:35:28.324047 | orchestrator | 2026-01-10 14:35:28.324052 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2026-01-10 14:35:28.324057 | orchestrator | Saturday 10 January 2026 14:30:59 +0000 (0:00:00.855) 0:00:01.061 ****** 2026-01-10 14:35:28.324063 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:35:28.324069 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:35:28.324074 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:35:28.324079 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:35:28.324084 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:35:28.324089 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:35:28.324094 | orchestrator | 2026-01-10 14:35:28.324099 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2026-01-10 14:35:28.324104 | orchestrator | Saturday 10 January 2026 14:31:00 +0000 (0:00:00.586) 0:00:01.648 ****** 2026-01-10 14:35:28.324109 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:35:28.324114 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:35:28.324119 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:35:28.324124 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:35:28.324129 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:35:28.324136 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:35:28.324144 | orchestrator | 2026-01-10 14:35:28.324154 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2026-01-10 14:35:28.324167 | orchestrator | Saturday 10 January 2026 14:31:01 +0000 (0:00:00.726) 0:00:02.375 ****** 2026-01-10 14:35:28.324174 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:35:28.324182 | orchestrator | changed: 
[testbed-node-3] 2026-01-10 14:35:28.324189 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:35:28.324198 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:35:28.324205 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:35:28.324213 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:35:28.324222 | orchestrator | 2026-01-10 14:35:28.324230 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2026-01-10 14:35:28.324238 | orchestrator | Saturday 10 January 2026 14:31:03 +0000 (0:00:02.157) 0:00:04.532 ****** 2026-01-10 14:35:28.324250 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:35:28.324260 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:35:28.324267 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:35:28.324275 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:35:28.324283 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:35:28.324292 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:35:28.324300 | orchestrator | 2026-01-10 14:35:28.324309 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] ************************** 2026-01-10 14:35:28.324317 | orchestrator | Saturday 10 January 2026 14:31:05 +0000 (0:00:02.167) 0:00:06.699 ****** 2026-01-10 14:35:28.324325 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:35:28.324334 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:35:28.324343 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:35:28.324351 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:35:28.324360 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:35:28.324368 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:35:28.324375 | orchestrator | 2026-01-10 14:35:28.324381 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2026-01-10 14:35:28.324386 | orchestrator | Saturday 10 January 2026 14:31:07 +0000 (0:00:01.401) 0:00:08.101 ****** 
2026-01-10 14:35:28.324391 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:35:28.324396 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:35:28.324401 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:35:28.324407 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:35:28.324412 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:35:28.324417 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:35:28.324422 | orchestrator | 2026-01-10 14:35:28.324438 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2026-01-10 14:35:28.324444 | orchestrator | Saturday 10 January 2026 14:31:07 +0000 (0:00:00.739) 0:00:08.840 ****** 2026-01-10 14:35:28.324449 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:35:28.324454 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:35:28.324459 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:35:28.324464 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:35:28.324469 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:35:28.324474 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:35:28.324479 | orchestrator | 2026-01-10 14:35:28.324484 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2026-01-10 14:35:28.324490 | orchestrator | Saturday 10 January 2026 14:31:08 +0000 (0:00:00.792) 0:00:09.633 ****** 2026-01-10 14:35:28.324495 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2026-01-10 14:35:28.324500 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-01-10 14:35:28.324505 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:35:28.324511 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2026-01-10 14:35:28.324517 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-01-10 14:35:28.324522 | 
orchestrator | skipping: [testbed-node-4] 2026-01-10 14:35:28.324528 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2026-01-10 14:35:28.324534 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-01-10 14:35:28.324539 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-01-10 14:35:28.324545 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-01-10 14:35:28.324562 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:35:28.324568 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:35:28.324573 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-01-10 14:35:28.324579 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-01-10 14:35:28.324585 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-01-10 14:35:28.324590 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:35:28.324596 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-01-10 14:35:28.324601 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:35:28.324607 | orchestrator | 2026-01-10 14:35:28.324613 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] ********************* 2026-01-10 14:35:28.324618 | orchestrator | Saturday 10 January 2026 14:31:09 +0000 (0:00:00.661) 0:00:10.295 ****** 2026-01-10 14:35:28.324624 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:35:28.324630 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:35:28.324635 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:35:28.324641 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:35:28.324647 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:35:28.324652 | orchestrator | skipping: [testbed-node-2] 2026-01-10 
14:35:28.324658 | orchestrator | 2026-01-10 14:35:28.324663 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2026-01-10 14:35:28.324671 | orchestrator | Saturday 10 January 2026 14:31:10 +0000 (0:00:01.317) 0:00:11.612 ****** 2026-01-10 14:35:28.324677 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:35:28.324683 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:35:28.324688 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:35:28.324694 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:35:28.324699 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:35:28.324705 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:35:28.324710 | orchestrator | 2026-01-10 14:35:28.324716 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2026-01-10 14:35:28.324722 | orchestrator | Saturday 10 January 2026 14:31:11 +0000 (0:00:01.356) 0:00:12.968 ****** 2026-01-10 14:35:28.324732 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:35:28.324738 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:35:28.324743 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:35:28.324749 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:35:28.324765 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:35:28.324771 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:35:28.324777 | orchestrator | 2026-01-10 14:35:28.324783 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2026-01-10 14:35:28.324795 | orchestrator | Saturday 10 January 2026 14:31:17 +0000 (0:00:05.931) 0:00:18.899 ****** 2026-01-10 14:35:28.324802 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:35:28.324807 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:35:28.324813 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:35:28.324819 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:35:28.324849 
| orchestrator | skipping: [testbed-node-1] 2026-01-10 14:35:28.324855 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:35:28.324861 | orchestrator | 2026-01-10 14:35:28.324867 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2026-01-10 14:35:28.324873 | orchestrator | Saturday 10 January 2026 14:31:19 +0000 (0:00:01.373) 0:00:20.273 ****** 2026-01-10 14:35:28.324879 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:35:28.324884 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:35:28.324889 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:35:28.324894 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:35:28.324899 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:35:28.324903 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:35:28.324908 | orchestrator | 2026-01-10 14:35:28.324914 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2026-01-10 14:35:28.324920 | orchestrator | Saturday 10 January 2026 14:31:20 +0000 (0:00:01.639) 0:00:21.913 ****** 2026-01-10 14:35:28.324925 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:35:28.324930 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:35:28.324935 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:35:28.324940 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:35:28.324945 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:35:28.324953 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:35:28.324959 | orchestrator | 2026-01-10 14:35:28.324964 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *************** 2026-01-10 14:35:28.324969 | orchestrator | Saturday 10 January 2026 14:31:21 +0000 (0:00:01.113) 0:00:23.026 ****** 2026-01-10 14:35:28.324974 | orchestrator | skipping: [testbed-node-3] => (item=rancher)  2026-01-10 14:35:28.324979 | 
orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)  2026-01-10 14:35:28.324984 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:35:28.324989 | orchestrator | skipping: [testbed-node-4] => (item=rancher)  2026-01-10 14:35:28.324994 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)  2026-01-10 14:35:28.324999 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:35:28.325004 | orchestrator | skipping: [testbed-node-5] => (item=rancher)  2026-01-10 14:35:28.325009 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)  2026-01-10 14:35:28.325014 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:35:28.325019 | orchestrator | skipping: [testbed-node-0] => (item=rancher)  2026-01-10 14:35:28.325024 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)  2026-01-10 14:35:28.325029 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:35:28.325034 | orchestrator | skipping: [testbed-node-1] => (item=rancher)  2026-01-10 14:35:28.325039 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)  2026-01-10 14:35:28.325044 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:35:28.325049 | orchestrator | skipping: [testbed-node-2] => (item=rancher)  2026-01-10 14:35:28.325054 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)  2026-01-10 14:35:28.325064 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:35:28.325069 | orchestrator | 2026-01-10 14:35:28.325074 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] *** 2026-01-10 14:35:28.325083 | orchestrator | Saturday 10 January 2026 14:31:23 +0000 (0:00:01.407) 0:00:24.433 ****** 2026-01-10 14:35:28.325088 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:35:28.325093 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:35:28.325098 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:35:28.325103 | orchestrator | skipping: [testbed-node-1] 
2026-01-10 14:35:28.325108 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:35:28.325113 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:35:28.325118 | orchestrator |
2026-01-10 14:35:28.325123 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] ***
2026-01-10 14:35:28.325128 | orchestrator | Saturday 10 January 2026 14:31:24 +0000 (0:00:00.879) 0:00:25.313 ******
2026-01-10 14:35:28.325134 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:35:28.325143 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:35:28.325156 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:35:28.325164 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:35:28.325172 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:35:28.325180 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:35:28.325187 | orchestrator |
2026-01-10 14:35:28.325195 | orchestrator | PLAY [Deploy k3s master nodes] *************************************************
2026-01-10 14:35:28.325203 | orchestrator |
2026-01-10 14:35:28.325212 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] ***
2026-01-10 14:35:28.325221 | orchestrator | Saturday 10 January 2026 14:31:25 +0000 (0:00:01.457) 0:00:26.771 ******
2026-01-10 14:35:28.325229 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:35:28.325237 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:35:28.325245 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:35:28.325259 | orchestrator |
2026-01-10 14:35:28.325320 | orchestrator | TASK [k3s_server : Stop k3s-init] **********************************************
2026-01-10 14:35:28.325326 | orchestrator | Saturday 10 January 2026 14:31:27 +0000 (0:00:01.829) 0:00:28.601 ******
2026-01-10 14:35:28.325331 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:35:28.325337 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:35:28.325344 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:35:28.325354 | orchestrator |
2026-01-10 14:35:28.325366 | orchestrator | TASK [k3s_server : Stop k3s] ***************************************************
2026-01-10 14:35:28.325373 | orchestrator | Saturday 10 January 2026 14:31:28 +0000 (0:00:01.434) 0:00:30.035 ******
2026-01-10 14:35:28.325381 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:35:28.325389 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:35:28.325397 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:35:28.325405 | orchestrator |
2026-01-10 14:35:28.325411 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] ****************************
2026-01-10 14:35:28.325419 | orchestrator | Saturday 10 January 2026 14:31:30 +0000 (0:00:00.919) 0:00:31.232 ******
2026-01-10 14:35:28.325425 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:35:28.325433 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:35:28.325441 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:35:28.325450 | orchestrator |
2026-01-10 14:35:28.325458 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] *********************************
2026-01-10 14:35:28.325467 | orchestrator | Saturday 10 January 2026 14:31:31 +0000 (0:00:00.344) 0:00:32.151 ******
2026-01-10 14:35:28.325475 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:35:28.325484 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:35:28.325492 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:35:28.325500 | orchestrator |
2026-01-10 14:35:28.325508 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] **************************
2026-01-10 14:35:28.325517 | orchestrator | Saturday 10 January 2026 14:31:31 +0000 (0:00:00.344) 0:00:32.496 ******
2026-01-10 14:35:28.325522 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:35:28.325527 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:35:28.325538 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:35:28.325544 | orchestrator |
2026-01-10 14:35:28.325549 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] **************************
2026-01-10 14:35:28.325554 | orchestrator | Saturday 10 January 2026 14:31:33 +0000 (0:00:02.138) 0:00:34.635 ******
2026-01-10 14:35:28.325559 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:35:28.325564 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:35:28.325569 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:35:28.325574 | orchestrator |
2026-01-10 14:35:28.325579 | orchestrator | TASK [k3s_server : Deploy vip manifest] ****************************************
2026-01-10 14:35:28.325584 | orchestrator | Saturday 10 January 2026 14:31:35 +0000 (0:00:02.320) 0:00:36.955 ******
2026-01-10 14:35:28.325593 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:35:28.325599 | orchestrator |
2026-01-10 14:35:28.325604 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] *******************************
2026-01-10 14:35:28.325609 | orchestrator | Saturday 10 January 2026 14:31:36 +0000 (0:00:00.620) 0:00:37.576 ******
2026-01-10 14:35:28.325614 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:35:28.325619 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:35:28.325624 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:35:28.325629 | orchestrator |
2026-01-10 14:35:28.325634 | orchestrator | TASK [k3s_server : Create manifests directory on first master] *****************
2026-01-10 14:35:28.325639 | orchestrator | Saturday 10 January 2026 14:31:39 +0000 (0:00:03.211) 0:00:40.788 ******
2026-01-10 14:35:28.325644 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:35:28.325649 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:35:28.325654 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:35:28.325659 | orchestrator |
2026-01-10 14:35:28.325664 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] *****************
2026-01-10 14:35:28.325669 | orchestrator | Saturday 10 January 2026 14:31:40 +0000 (0:00:00.902) 0:00:41.690 ******
2026-01-10 14:35:28.325674 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:35:28.325679 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:35:28.325684 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:35:28.325689 | orchestrator |
2026-01-10 14:35:28.325695 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] **************************
2026-01-10 14:35:28.325700 | orchestrator | Saturday 10 January 2026 14:31:41 +0000 (0:00:01.292) 0:00:42.982 ******
2026-01-10 14:35:28.325705 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:35:28.325710 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:35:28.325715 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:35:28.325720 | orchestrator |
2026-01-10 14:35:28.325725 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************
2026-01-10 14:35:28.325736 | orchestrator | Saturday 10 January 2026 14:31:43 +0000 (0:00:01.770) 0:00:44.753 ******
2026-01-10 14:35:28.325741 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:35:28.325746 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:35:28.325751 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:35:28.325756 | orchestrator |
2026-01-10 14:35:28.325761 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] ***********************************
2026-01-10 14:35:28.325766 | orchestrator | Saturday 10 January 2026 14:31:44 +0000 (0:00:00.666) 0:00:45.419 ******
2026-01-10 14:35:28.325771 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:35:28.325777 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:35:28.325782 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:35:28.325786 | orchestrator |
2026-01-10 14:35:28.325792 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] *********
2026-01-10 14:35:28.325797 | orchestrator | Saturday 10 January 2026 14:31:45 +0000 (0:00:00.713) 0:00:46.133 ******
2026-01-10 14:35:28.325802 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:35:28.325807 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:35:28.325812 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:35:28.325821 | orchestrator |
2026-01-10 14:35:28.325842 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] **********
2026-01-10 14:35:28.325848 | orchestrator | Saturday 10 January 2026 14:31:46 +0000 (0:00:01.846) 0:00:47.979 ******
2026-01-10 14:35:28.325853 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:35:28.325858 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:35:28.325863 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:35:28.325868 | orchestrator |
2026-01-10 14:35:28.325873 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] ***
2026-01-10 14:35:28.325878 | orchestrator | Saturday 10 January 2026 14:31:49 +0000 (0:00:02.327) 0:00:50.307 ******
2026-01-10 14:35:28.325883 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:35:28.325888 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:35:28.325893 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:35:28.325898 | orchestrator |
2026-01-10 14:35:28.325904 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] ***
2026-01-10 14:35:28.325909 | orchestrator | Saturday 10 January 2026 14:31:50 +0000 (0:00:01.119) 0:00:51.426 ******
2026-01-10 14:35:28.325914 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-01-10 14:35:28.325920 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-01-10 14:35:28.325925 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-01-10 14:35:28.325931 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-01-10 14:35:28.325936 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-01-10 14:35:28.325941 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-01-10 14:35:28.325946 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-01-10 14:35:28.325951 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-01-10 14:35:28.325956 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-01-10 14:35:28.325964 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-01-10 14:35:28.325996 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-01-10 14:35:28.326001 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-01-10 14:35:28.326007 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:35:28.326052 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:35:28.326060 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:35:28.326065 | orchestrator |
2026-01-10 14:35:28.326070 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ******************************
2026-01-10 14:35:28.326075 | orchestrator | Saturday 10 January 2026 14:32:33 +0000 (0:00:43.363) 0:01:34.790 ******
2026-01-10 14:35:28.326080 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:35:28.326089 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:35:28.326097 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:35:28.326105 | orchestrator |
2026-01-10 14:35:28.326114 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] *********
2026-01-10 14:35:28.326122 | orchestrator | Saturday 10 January 2026 14:32:33 +0000 (0:00:00.259) 0:01:35.049 ******
2026-01-10 14:35:28.326135 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:35:28.326144 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:35:28.326151 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:35:28.326159 | orchestrator |
2026-01-10 14:35:28.326167 | orchestrator | TASK [k3s_server : Copy K3s service file] **************************************
2026-01-10 14:35:28.326176 | orchestrator | Saturday 10 January 2026 14:32:35 +0000 (0:00:01.073) 0:01:36.122 ******
2026-01-10 14:35:28.326184 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:35:28.326192 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:35:28.326200 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:35:28.326209 | orchestrator |
2026-01-10 14:35:28.326225 | orchestrator | TASK [k3s_server : Enable and check K3s service] *******************************
2026-01-10 14:35:28.326235 | orchestrator | Saturday 10 January 2026 14:32:36 +0000 (0:00:01.247) 0:01:37.370 ******
2026-01-10 14:35:28.326243 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:35:28.326251 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:35:28.326259 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:35:28.326267 | orchestrator |
2026-01-10 14:35:28.326276 | orchestrator | TASK [k3s_server : Wait for node-token] ****************************************
2026-01-10 14:35:28.326284 | orchestrator | Saturday 10 January 2026 14:33:00 +0000 (0:00:24.279) 0:02:01.649 ******
2026-01-10 14:35:28.326293 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:35:28.326301 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:35:28.326310 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:35:28.326318 | orchestrator |
2026-01-10 14:35:28.326327 | orchestrator | TASK [k3s_server : Register node-token file access mode] ***********************
2026-01-10 14:35:28.326335 | orchestrator | Saturday 10 January 2026 14:33:01 +0000 (0:00:00.693) 0:02:02.343 ******
2026-01-10 14:35:28.326343 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:35:28.326352 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:35:28.326360 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:35:28.326370 | orchestrator |
2026-01-10 14:35:28.326379 | orchestrator | TASK [k3s_server : Change file access node-token] ******************************
2026-01-10 14:35:28.326387 | orchestrator | Saturday 10 January 2026 14:33:01 +0000 (0:00:00.573) 0:02:02.916 ******
2026-01-10 14:35:28.326395 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:35:28.326400 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:35:28.326405 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:35:28.326410 | orchestrator |
2026-01-10 14:35:28.326415 | orchestrator | TASK [k3s_server : Read node-token from master] ********************************
2026-01-10 14:35:28.326420 | orchestrator | Saturday 10 January 2026 14:33:02 +0000 (0:00:00.554) 0:02:03.471 ******
2026-01-10 14:35:28.326425 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:35:28.326430 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:35:28.326435 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:35:28.326440 | orchestrator |
2026-01-10 14:35:28.326445 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************
2026-01-10 14:35:28.326450 | orchestrator | Saturday 10 January 2026 14:33:03 +0000 (0:00:00.746) 0:02:04.218 ******
2026-01-10 14:35:28.326455 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:35:28.326460 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:35:28.326465 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:35:28.326470 | orchestrator |
2026-01-10 14:35:28.326475 | orchestrator | TASK [k3s_server : Restore node-token file access] *****************************
2026-01-10 14:35:28.326480 | orchestrator | Saturday 10 January 2026 14:33:03 +0000 (0:00:00.279) 0:02:04.497 ******
2026-01-10 14:35:28.326485 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:35:28.326490 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:35:28.326495 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:35:28.326500 | orchestrator |
2026-01-10 14:35:28.326505 | orchestrator | TASK [k3s_server : Create directory .kube] *************************************
2026-01-10 14:35:28.326510 | orchestrator | Saturday 10 January 2026 14:33:04 +0000 (0:00:00.626) 0:02:05.124 ******
2026-01-10 14:35:28.326515 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:35:28.326520 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:35:28.326531 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:35:28.326536 | orchestrator |
2026-01-10 14:35:28.326541 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ********************
2026-01-10 14:35:28.326546 | orchestrator | Saturday 10 January 2026 14:33:04 +0000 (0:00:00.600) 0:02:05.724 ******
2026-01-10 14:35:28.326551 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:35:28.326556 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:35:28.326561 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:35:28.326566 | orchestrator |
2026-01-10 14:35:28.326574 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] *****
2026-01-10 14:35:28.326582 | orchestrator | Saturday 10 January 2026 14:33:05 +0000 (0:00:01.007) 0:02:06.731 ******
2026-01-10 14:35:28.326589 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:35:28.326597 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:35:28.326606 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:35:28.326615 | orchestrator |
2026-01-10 14:35:28.326623 | orchestrator | TASK [k3s_server : Create kubectl symlink] *************************************
2026-01-10 14:35:28.326631 | orchestrator | Saturday 10 January 2026 14:33:06 +0000 (0:00:00.800) 0:02:07.532 ******
2026-01-10 14:35:28.326645 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:35:28.326651 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:35:28.326656 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:35:28.326661 | orchestrator |
2026-01-10 14:35:28.326666 | orchestrator | TASK [k3s_server : Create crictl symlink] **************************************
2026-01-10 14:35:28.326671 | orchestrator | Saturday 10 January 2026 14:33:06 +0000 (0:00:00.294) 0:02:07.827 ******
2026-01-10 14:35:28.326676 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:35:28.326681 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:35:28.326686 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:35:28.326691 | orchestrator |
2026-01-10 14:35:28.326696 | orchestrator | TASK [k3s_server : Get contents of manifests folder] ***************************
2026-01-10 14:35:28.326701 | orchestrator | Saturday 10 January 2026 14:33:07 +0000 (0:00:00.293) 0:02:08.121 ******
2026-01-10 14:35:28.326706 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:35:28.326711 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:35:28.326716 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:35:28.326721 | orchestrator |
2026-01-10 14:35:28.326726 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] ***************************
2026-01-10 14:35:28.326731 | orchestrator | Saturday 10 January 2026 14:33:07 +0000 (0:00:00.839) 0:02:08.960 ******
2026-01-10 14:35:28.326736 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:35:28.326741 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:35:28.326746 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:35:28.326751 | orchestrator |
2026-01-10 14:35:28.326757 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] ***
2026-01-10 14:35:28.326762 | orchestrator | Saturday 10 January 2026 14:33:08 +0000 (0:00:00.635) 0:02:09.596 ******
2026-01-10 14:35:28.326767 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-01-10 14:35:28.326778 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-01-10 14:35:28.326783 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-01-10 14:35:28.326789 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-01-10 14:35:28.326794 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-01-10 14:35:28.326799 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-01-10 14:35:28.326804 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-01-10 14:35:28.326809 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-01-10 14:35:28.326814 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-01-10 14:35:28.326901 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-01-10 14:35:28.326913 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml)
2026-01-10 14:35:28.326922 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-01-10 14:35:28.326930 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-01-10 14:35:28.326938 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-01-10 14:35:28.326947 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml)
2026-01-10 14:35:28.326955 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-01-10 14:35:28.326961 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-01-10 14:35:28.326966 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-01-10 14:35:28.326971 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-01-10 14:35:28.326976 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-01-10 14:35:28.326981 | orchestrator |
2026-01-10 14:35:28.326986 | orchestrator | PLAY [Deploy k3s worker nodes] *************************************************
2026-01-10 14:35:28.326991 | orchestrator |
2026-01-10 14:35:28.326996 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] ***
2026-01-10 14:35:28.327001 | orchestrator | Saturday 10 January 2026 14:33:11 +0000 (0:00:03.109) 0:02:12.706 ******
2026-01-10 14:35:28.327006 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:35:28.327011 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:35:28.327016 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:35:28.327021 | orchestrator |
2026-01-10 14:35:28.327026 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] *******************************
2026-01-10 14:35:28.327031 | orchestrator | Saturday 10 January 2026 14:33:12 +0000 (0:00:00.507) 0:02:13.213 ******
2026-01-10 14:35:28.327061 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:35:28.327067 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:35:28.327072 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:35:28.327077 | orchestrator |
2026-01-10 14:35:28.327082 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ******************************
2026-01-10 14:35:28.327087 | orchestrator | Saturday 10 January 2026 14:33:12 +0000 (0:00:00.613) 0:02:13.827 ******
2026-01-10 14:35:28.327092 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:35:28.327097 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:35:28.327102 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:35:28.327107 | orchestrator |
2026-01-10 14:35:28.327113 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] **********************
2026-01-10 14:35:28.327126 | orchestrator | Saturday 10 January 2026 14:33:13 +0000 (0:00:00.350) 0:02:14.178 ******
2026-01-10 14:35:28.327131 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-10 14:35:28.327136 | orchestrator |
2026-01-10 14:35:28.327154 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] *************************
2026-01-10 14:35:28.327159 | orchestrator | Saturday 10 January 2026 14:33:13 +0000 (0:00:00.693) 0:02:14.871 ******
2026-01-10 14:35:28.327165 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:35:28.327173 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:35:28.327181 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:35:28.327189 | orchestrator |
2026-01-10 14:35:28.327197 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] *******************************
2026-01-10 14:35:28.327205 | orchestrator | Saturday 10 January 2026 14:33:14 +0000 (0:00:00.320) 0:02:15.192 ******
2026-01-10 14:35:28.327212 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:35:28.327226 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:35:28.327235 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:35:28.327243 | orchestrator |
2026-01-10 14:35:28.327252 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] **********************************
2026-01-10 14:35:28.327260 | orchestrator | Saturday 10 January 2026 14:33:14 +0000 (0:00:00.277) 0:02:15.469 ******
2026-01-10 14:35:28.327268 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:35:28.327276 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:35:28.327284 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:35:28.327292 | orchestrator |
2026-01-10 14:35:28.327300 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] ***************************
2026-01-10 14:35:28.327308 | orchestrator | Saturday 10 January 2026 14:33:14 +0000 (0:00:00.258) 0:02:15.727 ******
2026-01-10 14:35:28.327316 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:35:28.327324 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:35:28.327332 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:35:28.327340 | orchestrator |
2026-01-10 14:35:28.327358 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] ***************************
2026-01-10 14:35:28.327367 | orchestrator | Saturday 10 January 2026 14:33:15 +0000 (0:00:00.641) 0:02:16.368 ******
2026-01-10 14:35:28.327377 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:35:28.327382 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:35:28.327387 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:35:28.327392 | orchestrator |
2026-01-10 14:35:28.327397 | orchestrator | TASK [k3s_agent : Configure the k3s service] ***********************************
2026-01-10 14:35:28.327402 | orchestrator | Saturday 10 January 2026 14:33:16 +0000 (0:00:01.243) 0:02:17.611 ******
2026-01-10 14:35:28.327407 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:35:28.327412 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:35:28.327417 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:35:28.327422 | orchestrator |
2026-01-10 14:35:28.327428 | orchestrator | TASK [k3s_agent : Manage k3s service] ******************************************
2026-01-10 14:35:28.327433 | orchestrator | Saturday 10 January 2026 14:33:17 +0000 (0:00:01.278) 0:02:18.890 ******
2026-01-10 14:35:28.327438 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:35:28.327443 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:35:28.327448 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:35:28.327453 | orchestrator |
2026-01-10 14:35:28.327458 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2026-01-10 14:35:28.327463 | orchestrator |
2026-01-10 14:35:28.327468 | orchestrator | TASK [Get home directory of operator user] *************************************
2026-01-10 14:35:28.327473 | orchestrator | Saturday 10 January 2026 14:33:27 +0000 (0:00:09.556) 0:02:28.446 ******
2026-01-10 14:35:28.327478 | orchestrator | ok: [testbed-manager]
2026-01-10 14:35:28.327483 | orchestrator |
2026-01-10 14:35:28.327490 | orchestrator | TASK [Create .kube directory] **************************************************
2026-01-10 14:35:28.327499 | orchestrator | Saturday 10 January 2026 14:33:28 +0000 (0:00:00.823) 0:02:29.270 ******
2026-01-10 14:35:28.327507 | orchestrator | changed: [testbed-manager]
2026-01-10 14:35:28.327515 | orchestrator |
2026-01-10 14:35:28.327523 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-01-10 14:35:28.327532 | orchestrator | Saturday 10 January 2026 14:33:28 +0000 (0:00:00.553) 0:02:29.823 ******
2026-01-10 14:35:28.327538 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-01-10 14:35:28.327543 | orchestrator |
2026-01-10 14:35:28.327548 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-01-10 14:35:28.327553 | orchestrator | Saturday 10 January 2026 14:33:29 +0000 (0:00:00.600) 0:02:30.423 ******
2026-01-10 14:35:28.327558 | orchestrator | changed: [testbed-manager]
2026-01-10 14:35:28.327563 | orchestrator |
2026-01-10 14:35:28.327568 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2026-01-10 14:35:28.327573 | orchestrator | Saturday 10 January 2026 14:33:30 +0000 (0:00:01.174) 0:02:31.598 ******
2026-01-10 14:35:28.327578 | orchestrator | changed: [testbed-manager]
2026-01-10 14:35:28.327588 | orchestrator |
2026-01-10 14:35:28.327594 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2026-01-10 14:35:28.327599 | orchestrator | Saturday 10 January 2026 14:33:31 +0000 (0:00:00.665) 0:02:32.264 ******
2026-01-10 14:35:28.327604 | orchestrator | changed: [testbed-manager -> localhost]
2026-01-10 14:35:28.327609 | orchestrator |
2026-01-10 14:35:28.327614 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2026-01-10 14:35:28.327619 | orchestrator | Saturday 10 January 2026 14:33:32 +0000 (0:00:01.576) 0:02:33.841 ******
2026-01-10 14:35:28.327624 | orchestrator | changed: [testbed-manager -> localhost]
2026-01-10 14:35:28.327629 | orchestrator |
2026-01-10 14:35:28.327634 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2026-01-10 14:35:28.327639 | orchestrator | Saturday 10 January 2026 14:33:33 +0000 (0:00:00.785) 0:02:34.627 ****** 2026-01-10 14:35:28.327644 | orchestrator | changed: [testbed-manager] 2026-01-10 14:35:28.327650 | orchestrator | 2026-01-10 14:35:28.327655 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2026-01-10 14:35:28.327660 | orchestrator | Saturday 10 January 2026 14:33:33 +0000 (0:00:00.380) 0:02:35.007 ****** 2026-01-10 14:35:28.327665 | orchestrator | changed: [testbed-manager] 2026-01-10 14:35:28.327670 | orchestrator | 2026-01-10 14:35:28.327675 | orchestrator | PLAY [Apply role kubectl] ****************************************************** 2026-01-10 14:35:28.327680 | orchestrator | 2026-01-10 14:35:28.327686 | orchestrator | TASK [kubectl : Gather variables for each operating system] ******************** 2026-01-10 14:35:28.327691 | orchestrator | Saturday 10 January 2026 14:33:34 +0000 (0:00:00.581) 0:02:35.589 ****** 2026-01-10 14:35:28.327696 | orchestrator | ok: [testbed-manager] 2026-01-10 14:35:28.327701 | orchestrator | 2026-01-10 14:35:28.327707 | orchestrator | TASK [kubectl : Include distribution specific install tasks] ******************* 2026-01-10 14:35:28.327712 | orchestrator | Saturday 10 January 2026 14:33:34 +0000 (0:00:00.122) 0:02:35.712 ****** 2026-01-10 14:35:28.327717 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager 2026-01-10 14:35:28.327722 | orchestrator | 2026-01-10 14:35:28.327728 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ****************** 2026-01-10 14:35:28.327733 | orchestrator | Saturday 10 January 2026 14:33:34 +0000 (0:00:00.195) 0:02:35.907 ****** 2026-01-10 14:35:28.327738 | orchestrator | ok: [testbed-manager] 2026-01-10 14:35:28.327743 | orchestrator | 2026-01-10 14:35:28.327749 | orchestrator | TASK [kubectl : Install apt-transport-https package] 
***************************
2026-01-10 14:35:28.327754 | orchestrator | Saturday 10 January 2026 14:33:35 +0000 (0:00:00.991) 0:02:36.899 ******
2026-01-10 14:35:28.327759 | orchestrator | ok: [testbed-manager]
2026-01-10 14:35:28.327764 | orchestrator |
2026-01-10 14:35:28.327769 | orchestrator | TASK [kubectl : Add repository gpg key] ****************************************
2026-01-10 14:35:28.327774 | orchestrator | Saturday 10 January 2026 14:33:37 +0000 (0:00:01.653) 0:02:38.552 ******
2026-01-10 14:35:28.327779 | orchestrator | changed: [testbed-manager]
2026-01-10 14:35:28.327784 | orchestrator |
2026-01-10 14:35:28.327789 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************
2026-01-10 14:35:28.327794 | orchestrator | Saturday 10 January 2026 14:33:39 +0000 (0:00:01.804) 0:02:40.356 ******
2026-01-10 14:35:28.327799 | orchestrator | ok: [testbed-manager]
2026-01-10 14:35:28.327804 | orchestrator |
2026-01-10 14:35:28.327814 | orchestrator | TASK [kubectl : Add repository Debian] *****************************************
2026-01-10 14:35:28.327819 | orchestrator | Saturday 10 January 2026 14:33:39 +0000 (0:00:00.685) 0:02:41.042 ******
2026-01-10 14:35:28.327839 | orchestrator | changed: [testbed-manager]
2026-01-10 14:35:28.327846 | orchestrator |
2026-01-10 14:35:28.327851 | orchestrator | TASK [kubectl : Install required packages] *************************************
2026-01-10 14:35:28.327857 | orchestrator | Saturday 10 January 2026 14:33:47 +0000 (0:00:07.071) 0:02:48.113 ******
2026-01-10 14:35:28.327861 | orchestrator | changed: [testbed-manager]
2026-01-10 14:35:28.327866 | orchestrator |
2026-01-10 14:35:28.327871 | orchestrator | TASK [kubectl : Remove kubectl symlink] ****************************************
2026-01-10 14:35:28.327879 | orchestrator | Saturday 10 January 2026 14:34:02 +0000 (0:00:15.251) 0:03:03.365 ******
2026-01-10 14:35:28.327884 | orchestrator | ok: [testbed-manager]
2026-01-10 14:35:28.327890 | orchestrator |
2026-01-10 14:35:28.327895 | orchestrator | PLAY [Run post actions on master nodes] ****************************************
2026-01-10 14:35:28.327900 | orchestrator |
2026-01-10 14:35:28.327905 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] ***
2026-01-10 14:35:28.327909 | orchestrator | Saturday 10 January 2026 14:34:02 +0000 (0:00:00.491) 0:03:03.856 ******
2026-01-10 14:35:28.327915 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:35:28.327920 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:35:28.327925 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:35:28.327930 | orchestrator |
2026-01-10 14:35:28.327935 | orchestrator | TASK [k3s_server_post : Deploy calico] *****************************************
2026-01-10 14:35:28.327940 | orchestrator | Saturday 10 January 2026 14:34:03 +0000 (0:00:00.287) 0:03:04.143 ******
2026-01-10 14:35:28.327945 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:35:28.327950 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:35:28.327955 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:35:28.327960 | orchestrator |
2026-01-10 14:35:28.327965 | orchestrator | TASK [k3s_server_post : Deploy cilium] *****************************************
2026-01-10 14:35:28.327970 | orchestrator | Saturday 10 January 2026 14:34:03 +0000 (0:00:00.291) 0:03:04.435 ******
2026-01-10 14:35:28.327975 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:35:28.327980 | orchestrator |
2026-01-10 14:35:28.327985 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ******************
2026-01-10 14:35:28.327990 | orchestrator | Saturday 10 January 2026 14:34:03 +0000 (0:00:00.603) 0:03:05.039 ******
2026-01-10 14:35:28.327995 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-01-10 14:35:28.328000 | orchestrator |
2026-01-10 14:35:28.328006 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] *********************
2026-01-10 14:35:28.328014 | orchestrator | Saturday 10 January 2026 14:34:04 +0000 (0:00:00.765) 0:03:05.804 ******
2026-01-10 14:35:28.328021 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-01-10 14:35:28.328030 | orchestrator |
2026-01-10 14:35:28.328039 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************
2026-01-10 14:35:28.328048 | orchestrator | Saturday 10 January 2026 14:34:06 +0000 (0:00:01.783) 0:03:07.587 ******
2026-01-10 14:35:28.328056 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:35:28.328065 | orchestrator |
2026-01-10 14:35:28.328071 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] **********************
2026-01-10 14:35:28.328076 | orchestrator | Saturday 10 January 2026 14:34:06 +0000 (0:00:00.133) 0:03:07.721 ******
2026-01-10 14:35:28.328081 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-01-10 14:35:28.328086 | orchestrator |
2026-01-10 14:35:28.328091 | orchestrator | TASK [k3s_server_post : Check Cilium version] **********************************
2026-01-10 14:35:28.328096 | orchestrator | Saturday 10 January 2026 14:34:07 +0000 (0:00:00.976) 0:03:08.698 ******
2026-01-10 14:35:28.328101 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:35:28.328106 | orchestrator |
2026-01-10 14:35:28.328571 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************
2026-01-10 14:35:28.328606 | orchestrator | Saturday 10 January 2026 14:34:07 +0000 (0:00:00.093) 0:03:08.791 ******
2026-01-10 14:35:28.328614 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:35:28.328622 | orchestrator |
2026-01-10 14:35:28.328630 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] **********************
2026-01-10 14:35:28.328638 | orchestrator | Saturday 10 January 2026 14:34:07 +0000 (0:00:00.104) 0:03:08.896 ******
2026-01-10 14:35:28.328646 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:35:28.328654 | orchestrator |
2026-01-10 14:35:28.328662 | orchestrator | TASK [k3s_server_post : Log result] ********************************************
2026-01-10 14:35:28.328680 | orchestrator | Saturday 10 January 2026 14:34:07 +0000 (0:00:00.100) 0:03:08.996 ******
2026-01-10 14:35:28.328693 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:35:28.328702 | orchestrator |
2026-01-10 14:35:28.328710 | orchestrator | TASK [k3s_server_post : Install Cilium] ****************************************
2026-01-10 14:35:28.328718 | orchestrator | Saturday 10 January 2026 14:34:08 +0000 (0:00:00.103) 0:03:09.100 ******
2026-01-10 14:35:28.328723 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-01-10 14:35:28.328728 | orchestrator |
2026-01-10 14:35:28.328733 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] *****************************
2026-01-10 14:35:28.328738 | orchestrator | Saturday 10 January 2026 14:34:12 +0000 (0:00:04.783) 0:03:13.883 ******
2026-01-10 14:35:28.328744 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator)
2026-01-10 14:35:28.328749 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left).
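The `FAILED - RETRYING` line above comes from Ansible's `retries`/`delay`/`until` loop around the Cilium rollout checks: the task reruns a readiness command until it succeeds or the retry budget is exhausted. A minimal standalone sketch of that wait pattern follows; the `retry` helper and the marker file are hypothetical stand-ins, since no cluster is available here (on a real cluster the check would be something like `kubectl rollout status deployment/cilium-operator -n kube-system --timeout=10s`).

```shell
#!/bin/sh
# Sketch of the retries/delay/until wait pattern, assuming a hypothetical
# "retry <attempts> <delay> <command...>" helper.
retry() {
    attempts=$1; delay=$2; shift 2
    i=0
    until "$@"; do
        i=$((i + 1))
        if [ "$i" -ge "$attempts" ]; then
            return 1
        fi
        # Mirrors Ansible's "FAILED - RETRYING (N retries left)." output.
        echo "FAILED - RETRYING: check ($((attempts - i)) retries left)." >&2
        sleep "$delay"
    done
}

# Placeholder readiness check: a marker file that appears after ~2 seconds,
# standing in for the deployment reporting a completed rollout.
rm -f /tmp/cilium-ready
( sleep 2; touch /tmp/cilium-ready ) &
retry 30 1 test -f /tmp/cilium-ready && echo "resource ready"
```

The 44.57s spent in "Wait for Cilium resources" in the TASKS RECAP is exactly this loop cycling until all four workloads report ready.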
2026-01-10 14:35:28.328754 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium)
2026-01-10 14:35:28.328760 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay)
2026-01-10 14:35:28.328765 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui)
2026-01-10 14:35:28.328770 | orchestrator |
2026-01-10 14:35:28.328775 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************
2026-01-10 14:35:28.328780 | orchestrator | Saturday 10 January 2026 14:34:57 +0000 (0:00:44.571) 0:03:58.454 ******
2026-01-10 14:35:28.328794 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-01-10 14:35:28.328800 | orchestrator |
2026-01-10 14:35:28.328805 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ********************
2026-01-10 14:35:28.328810 | orchestrator | Saturday 10 January 2026 14:34:58 +0000 (0:00:01.364) 0:03:59.819 ******
2026-01-10 14:35:28.328815 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-01-10 14:35:28.328820 | orchestrator |
2026-01-10 14:35:28.328874 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] ***********************************
2026-01-10 14:35:28.328882 | orchestrator | Saturday 10 January 2026 14:35:00 +0000 (0:00:01.660) 0:04:01.479 ******
2026-01-10 14:35:28.328887 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-01-10 14:35:28.328892 | orchestrator |
2026-01-10 14:35:28.328897 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] ***
2026-01-10 14:35:28.328903 | orchestrator | Saturday 10 January 2026 14:35:01 +0000 (0:00:01.208) 0:04:02.688 ******
2026-01-10 14:35:28.328908 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:35:28.328913 | orchestrator |
2026-01-10 14:35:28.328918 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] *************************
2026-01-10 14:35:28.328923 | orchestrator | Saturday 10 January 2026 14:35:01 +0000 (0:00:00.124) 0:04:02.813 ******
2026-01-10 14:35:28.328928 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)
2026-01-10 14:35:28.328933 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io)
2026-01-10 14:35:28.328938 | orchestrator |
2026-01-10 14:35:28.328943 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] ***********************************
2026-01-10 14:35:28.328948 | orchestrator | Saturday 10 January 2026 14:35:03 +0000 (0:00:02.041) 0:04:04.854 ******
2026-01-10 14:35:28.328953 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:35:28.328958 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:35:28.328963 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:35:28.328968 | orchestrator |
2026-01-10 14:35:28.328973 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] ***************
2026-01-10 14:35:28.328978 | orchestrator | Saturday 10 January 2026 14:35:04 +0000 (0:00:00.370) 0:04:05.225 ******
2026-01-10 14:35:28.328983 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:35:28.328988 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:35:28.328993 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:35:28.328998 | orchestrator |
2026-01-10 14:35:28.329004 | orchestrator | PLAY [Apply role k9s] **********************************************************
2026-01-10 14:35:28.329014 | orchestrator |
2026-01-10 14:35:28.329019 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************
2026-01-10 14:35:28.329024 | orchestrator | Saturday 10 January 2026 14:35:05 +0000 (0:00:00.141) 0:04:06.520 ******
2026-01-10 14:35:28.329029 | orchestrator | ok: [testbed-manager]
2026-01-10 14:35:28.329034 | orchestrator |
2026-01-10 14:35:28.329038 | orchestrator | TASK [k9s : Include distribution specific install tasks] ***********************
2026-01-10 14:35:28.329043 | orchestrator | Saturday 10 January 2026 14:35:05 +0000 (0:00:00.226) 0:04:06.662 ******
2026-01-10 14:35:28.329048 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager
2026-01-10 14:35:28.329052 | orchestrator |
2026-01-10 14:35:28.329057 | orchestrator | TASK [k9s : Install k9s packages] **********************************************
2026-01-10 14:35:28.329062 | orchestrator | Saturday 10 January 2026 14:35:05 +0000 (0:00:05.730) 0:04:06.888 ******
2026-01-10 14:35:28.329067 | orchestrator | changed: [testbed-manager]
2026-01-10 14:35:28.329072 | orchestrator |
2026-01-10 14:35:28.329077 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] *****************
2026-01-10 14:35:28.329081 | orchestrator |
2026-01-10 14:35:28.329086 | orchestrator | TASK [Merge labels, annotations, and taints] ***********************************
2026-01-10 14:35:28.329091 | orchestrator | Saturday 10 January 2026 14:35:11 +0000 (0:00:00.844) 0:04:12.619 ******
2026-01-10 14:35:28.329141 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:35:28.329146 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:35:28.329151 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:35:28.329158 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:35:28.329165 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:35:28.329173 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:35:28.329181 | orchestrator |
2026-01-10 14:35:28.329189 | orchestrator | TASK [Manage labels] ***********************************************************
2026-01-10 14:35:28.329197 | orchestrator | Saturday 10 January 2026 14:35:12 +0000 (0:00:13.352) 0:04:13.464 ******
2026-01-10 14:35:28.329205 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-01-10 14:35:28.329213 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-01-10 14:35:28.329225 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-01-10 14:35:28.329231 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-01-10 14:35:28.329239 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-01-10 14:35:28.329246 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-01-10 14:35:28.329259 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-01-10 14:35:28.329267 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-01-10 14:35:28.329275 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled)
2026-01-10 14:35:28.329322 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-01-10 14:35:28.329329 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled)
2026-01-10 14:35:28.329337 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled)
2026-01-10 14:35:28.329352 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-01-10 14:35:28.329359 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-01-10 14:35:28.329367 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-01-10 14:35:28.329375 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-01-10 14:35:28.329383 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-01-10 14:35:28.329398 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-01-10 14:35:28.329406 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-01-10 14:35:28.329414 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-01-10 14:35:28.329421 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-01-10 14:35:28.329429 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-01-10 14:35:28.329436 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-01-10 14:35:28.329443 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-01-10 14:35:28.329452 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-01-10 14:35:28.329460 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-01-10 14:35:28.329467 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-01-10 14:35:28.329474 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-01-10 14:35:28.329482 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-01-10 14:35:28.329518 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-01-10 14:35:28.329527 | orchestrator |
2026-01-10 14:35:28.329535 | orchestrator | TASK [Manage annotations] ******************************************************
2026-01-10 14:35:28.329544 | orchestrator | Saturday 10 January 2026 14:35:25 +0000 (0:00:13.352) 0:04:26.816 ******
2026-01-10 14:35:28.329551 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:35:28.329559 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:35:28.329567 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:35:28.329575 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:35:28.329583 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:35:28.329591 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:35:28.329599 | orchestrator |
2026-01-10 14:35:28.329607 | orchestrator | TASK [Manage taints] ***********************************************************
2026-01-10 14:35:28.329615 | orchestrator | Saturday 10 January 2026 14:35:26 +0000 (0:00:00.707) 0:04:27.523 ******
2026-01-10 14:35:28.329623 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:35:28.329631 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:35:28.329639 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:35:28.329647 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:35:28.329655 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:35:28.329662 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:35:28.329670 | orchestrator |
2026-01-10 14:35:28.329677 | orchestrator | PLAY RECAP *********************************************************************
2026-01-10 14:35:28.329686 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:35:28.329697 | orchestrator | testbed-node-0 : ok=50  changed=23  unreachable=0 failed=0 skipped=28  rescued=0 ignored=0
2026-01-10 14:35:28.329706 | orchestrator | testbed-node-1 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-01-10 14:35:28.329714 | orchestrator | testbed-node-2 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-01-10 14:35:28.329728 | orchestrator | testbed-node-3 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-01-10 14:35:28.329736 | orchestrator | testbed-node-4 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-01-10 14:35:28.329752 | orchestrator | testbed-node-5 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-01-10 14:35:28.329760 | orchestrator |
2026-01-10 14:35:28.329768 | orchestrator |
2026-01-10 14:35:28.329777 | orchestrator | TASKS RECAP ********************************************************************
2026-01-10 14:35:28.329785 | orchestrator | Saturday 10 January 2026 14:35:26 +0000 (0:00:00.497) 0:04:28.020 ******
2026-01-10 14:35:28.329793 | orchestrator | ===============================================================================
2026-01-10 14:35:28.329801 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 44.57s
2026-01-10 14:35:28.329809 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 43.36s
2026-01-10 14:35:28.329818 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 24.28s
2026-01-10 14:35:28.329851 | orchestrator | kubectl : Install required packages ------------------------------------ 15.25s
2026-01-10 14:35:28.329860 | orchestrator | Manage labels ---------------------------------------------------------- 13.35s
2026-01-10 14:35:28.329868 | orchestrator | k3s_agent : Manage k3s service ------------------------------------------ 9.56s
2026-01-10 14:35:28.329875 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 7.07s
2026-01-10 14:35:28.329883 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 5.93s
2026-01-10 14:35:28.329891 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 5.73s
2026-01-10 14:35:28.329899 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 4.78s
2026-01-10 14:35:28.329907 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 3.21s
2026-01-10 14:35:28.329915 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.11s
2026-01-10 14:35:28.329923 | orchestrator | k3s_server : Detect Kubernetes version for label compatibility ---------- 2.33s
2026-01-10 14:35:28.329931 | orchestrator | k3s_server : Create custom resolv.conf for k3s -------------------------- 2.32s
2026-01-10 14:35:28.329938 | orchestrator | k3s_prereq : Enable IPv6 forwarding ------------------------------------- 2.17s
2026-01-10 14:35:28.329946 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.16s
2026-01-10 14:35:28.329954 | orchestrator | k3s_server : Create /etc/rancher/k3s directory -------------------------- 2.14s
2026-01-10 14:35:28.329962 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 2.04s
2026-01-10 14:35:28.329970 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 1.85s
2026-01-10 14:35:28.329979 | orchestrator | k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers --- 1.83s
2026-01-10 14:35:28.329987 | orchestrator | 2026-01-10 14:35:28 | INFO  | Task 17d93e9b-366a-450f-8ecc-93dc96c1cffb is in state STARTED
2026-01-10 14:35:28.329996 | orchestrator | 2026-01-10 14:35:28 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:35:31.378845 | orchestrator | 2026-01-10 14:35:31 | INFO  | Task e49c1d62-0376-427a-8e08-be74ad4fd0d4 is in state STARTED
2026-01-10 14:35:31.378917 | orchestrator | 2026-01-10 14:35:31 | INFO  | Task dfb308a8-9aed-4c35-936d-6fb4f84c96d6 is in state STARTED
2026-01-10 14:35:31.378924 | orchestrator | 2026-01-10 14:35:31 | INFO  | Task ad24724f-bcd1-4cd3-969c-ef2b14c500c6 is in state STARTED
2026-01-10 14:35:31.378928 | orchestrator | 2026-01-10 14:35:31 | INFO  | Task 955b2409-5d44-41fc-9935-7bfdb67baec4 is in state STARTED
2026-01-10 14:35:31.378932 | orchestrator | 2026-01-10 14:35:31 | INFO  | Task 3036a5c7-a2e7-42a0-bc00-e49be3f10aff is in state STARTED
2026-01-10 14:35:31.378936 | orchestrator | 2026-01-10 14:35:31 | INFO  | Task 17d93e9b-366a-450f-8ecc-93dc96c1cffb is in state STARTED
2026-01-10 14:35:31.378941 | orchestrator | 2026-01-10 14:35:31 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:35:34.404427 | orchestrator | 2026-01-10 14:35:34 | INFO  | Task e49c1d62-0376-427a-8e08-be74ad4fd0d4 is in state STARTED
2026-01-10 14:35:34.405230 | orchestrator | 2026-01-10 14:35:34 | INFO  | Task dfb308a8-9aed-4c35-936d-6fb4f84c96d6 is in state STARTED
2026-01-10 14:35:34.406503 | orchestrator | 2026-01-10 14:35:34 | INFO  | Task ad24724f-bcd1-4cd3-969c-ef2b14c500c6 is in state STARTED
2026-01-10 14:35:34.407836 | orchestrator | 2026-01-10 14:35:34 | INFO  | Task 955b2409-5d44-41fc-9935-7bfdb67baec4 is in state STARTED
2026-01-10 14:35:34.409433 | orchestrator | 2026-01-10 14:35:34 | INFO  | Task 3036a5c7-a2e7-42a0-bc00-e49be3f10aff is in state STARTED
2026-01-10 14:35:34.411076 | orchestrator | 2026-01-10 14:35:34 | INFO  | Task 17d93e9b-366a-450f-8ecc-93dc96c1cffb is in state STARTED
2026-01-10 14:35:34.411116 | orchestrator | 2026-01-10 14:35:34 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:35:37.449525 | orchestrator | 2026-01-10 14:35:37 | INFO  | Task e49c1d62-0376-427a-8e08-be74ad4fd0d4 is in state STARTED
2026-01-10 14:35:37.452677 | orchestrator | 2026-01-10 14:35:37 | INFO  | Task dfb308a8-9aed-4c35-936d-6fb4f84c96d6 is in state STARTED
2026-01-10 14:35:37.453896 | orchestrator | 2026-01-10 14:35:37 | INFO  | Task ad24724f-bcd1-4cd3-969c-ef2b14c500c6 is in state SUCCESS
2026-01-10 14:35:37.455570 | orchestrator | 2026-01-10 14:35:37 | INFO  | Task 955b2409-5d44-41fc-9935-7bfdb67baec4 is in state STARTED
2026-01-10 14:35:37.458546 | orchestrator | 2026-01-10 14:35:37 | INFO  | Task 3036a5c7-a2e7-42a0-bc00-e49be3f10aff is in state STARTED
2026-01-10 14:35:37.458591 | orchestrator | 2026-01-10 14:35:37 | INFO  | Task 17d93e9b-366a-450f-8ecc-93dc96c1cffb is in state STARTED
2026-01-10 14:35:37.459106 | orchestrator | 2026-01-10 14:35:37 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:35:40.506443 | orchestrator | 2026-01-10 14:35:40 | INFO  | Task e49c1d62-0376-427a-8e08-be74ad4fd0d4 is in state STARTED
2026-01-10 14:35:40.508022 | orchestrator | 2026-01-10 14:35:40 | INFO  | Task dfb308a8-9aed-4c35-936d-6fb4f84c96d6 is in state STARTED
2026-01-10 14:35:40.510009 | orchestrator | 2026-01-10 14:35:40 | INFO  | Task 955b2409-5d44-41fc-9935-7bfdb67baec4 is in state STARTED
2026-01-10 14:35:40.511458 | orchestrator | 2026-01-10 14:35:40 | INFO  | Task 3036a5c7-a2e7-42a0-bc00-e49be3f10aff is in state STARTED
2026-01-10 14:35:40.513747 | orchestrator | 2026-01-10 14:35:40 | INFO  | Task 17d93e9b-366a-450f-8ecc-93dc96c1cffb is in state STARTED
2026-01-10 14:35:40.513988 | orchestrator | 2026-01-10 14:35:40 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:35:43.565428 | orchestrator | 2026-01-10 14:35:43 | INFO  | Task e49c1d62-0376-427a-8e08-be74ad4fd0d4 is in state STARTED
2026-01-10 14:35:43.569593 | orchestrator | 2026-01-10 14:35:43 | INFO  | Task dfb308a8-9aed-4c35-936d-6fb4f84c96d6 is in state STARTED
2026-01-10 14:35:43.570579 | orchestrator | 2026-01-10 14:35:43 | INFO  | Task 955b2409-5d44-41fc-9935-7bfdb67baec4 is in state STARTED
2026-01-10 14:35:43.571427 | orchestrator | 2026-01-10 14:35:43 | INFO  | Task 3036a5c7-a2e7-42a0-bc00-e49be3f10aff is in state SUCCESS
2026-01-10 14:35:43.573024 | orchestrator | 2026-01-10 14:35:43 | INFO  | Task 17d93e9b-366a-450f-8ecc-93dc96c1cffb is in state STARTED
2026-01-10 14:35:43.573081 | orchestrator | 2026-01-10 14:35:43 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:35:46.615430 | orchestrator | 2026-01-10 14:35:46 | INFO  | Task e49c1d62-0376-427a-8e08-be74ad4fd0d4 is in state STARTED
2026-01-10 14:35:46.616729 | orchestrator | 2026-01-10 14:35:46 | INFO  | Task dfb308a8-9aed-4c35-936d-6fb4f84c96d6 is in state STARTED
2026-01-10 14:35:46.616806 | orchestrator | 2026-01-10 14:35:46 | INFO  | Task 955b2409-5d44-41fc-9935-7bfdb67baec4 is in state STARTED
2026-01-10 14:35:46.617483 | orchestrator | 2026-01-10 14:35:46 | INFO  | Task 17d93e9b-366a-450f-8ecc-93dc96c1cffb is in state STARTED
2026-01-10 14:35:46.617513 | orchestrator | 2026-01-10 14:35:46 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:35:49.642075 | orchestrator | 2026-01-10 14:35:49 | INFO  | Task e49c1d62-0376-427a-8e08-be74ad4fd0d4 is in state STARTED
2026-01-10 14:35:49.644408 | orchestrator | 2026-01-10 14:35:49 | INFO  | Task dfb308a8-9aed-4c35-936d-6fb4f84c96d6 is in state STARTED
2026-01-10 14:35:49.645894 | orchestrator | 2026-01-10 14:35:49 | INFO  | Task 955b2409-5d44-41fc-9935-7bfdb67baec4 is in state STARTED
2026-01-10 14:35:49.647210 | orchestrator | 2026-01-10 14:35:49 | INFO  | Task 17d93e9b-366a-450f-8ecc-93dc96c1cffb is in state STARTED
2026-01-10 14:35:49.647357 | orchestrator | 2026-01-10 14:35:49 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:35:52.701662 | orchestrator | 2026-01-10 14:35:52 | INFO  | Task e49c1d62-0376-427a-8e08-be74ad4fd0d4 is in state STARTED
2026-01-10 14:35:52.702131 | orchestrator | 2026-01-10 14:35:52 | INFO  | Task dfb308a8-9aed-4c35-936d-6fb4f84c96d6 is in state STARTED
2026-01-10 14:35:52.704187 | orchestrator | 2026-01-10 14:35:52 | INFO  | Task 955b2409-5d44-41fc-9935-7bfdb67baec4 is in state STARTED
2026-01-10 14:35:52.706761 | orchestrator | 2026-01-10 14:35:52 | INFO  | Task 17d93e9b-366a-450f-8ecc-93dc96c1cffb is in state STARTED
2026-01-10 14:35:52.706852 | orchestrator | 2026-01-10 14:35:52 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:35:55.741931 | orchestrator | 2026-01-10 14:35:55 | INFO  | Task e49c1d62-0376-427a-8e08-be74ad4fd0d4 is in state STARTED
2026-01-10 14:35:55.743943 | orchestrator | 2026-01-10 14:35:55 | INFO  | Task dfb308a8-9aed-4c35-936d-6fb4f84c96d6 is in state STARTED
2026-01-10 14:35:55.747367 | orchestrator | 2026-01-10 14:35:55 | INFO  | Task 955b2409-5d44-41fc-9935-7bfdb67baec4 is in state STARTED
2026-01-10 14:35:55.748983 | orchestrator | 2026-01-10 14:35:55 | INFO  | Task 17d93e9b-366a-450f-8ecc-93dc96c1cffb is in state STARTED
2026-01-10 14:35:55.749373 | orchestrator | 2026-01-10 14:35:55 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:35:58.779473 | orchestrator | 2026-01-10 14:35:58 | INFO  | Task e49c1d62-0376-427a-8e08-be74ad4fd0d4 is in state STARTED
2026-01-10 14:35:58.779753 | orchestrator | 2026-01-10 14:35:58 | INFO  | Task dfb308a8-9aed-4c35-936d-6fb4f84c96d6 is in state STARTED
2026-01-10 14:35:58.780462 | orchestrator | 2026-01-10 14:35:58 | INFO  | Task 955b2409-5d44-41fc-9935-7bfdb67baec4 is in state STARTED
2026-01-10 14:35:58.782531 | orchestrator | 2026-01-10 14:35:58 | INFO  | Task 17d93e9b-366a-450f-8ecc-93dc96c1cffb is in state STARTED
2026-01-10 14:35:58.782583 | orchestrator | 2026-01-10 14:35:58 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:36:01.811770 | orchestrator | 2026-01-10 14:36:01 | INFO  | Task e49c1d62-0376-427a-8e08-be74ad4fd0d4 is in state STARTED
2026-01-10 14:36:01.811908 | orchestrator | 2026-01-10 14:36:01 | INFO  | Task dfb308a8-9aed-4c35-936d-6fb4f84c96d6 is in state STARTED
2026-01-10 14:36:01.811920 | orchestrator | 2026-01-10 14:36:01 | INFO  | Task 955b2409-5d44-41fc-9935-7bfdb67baec4 is in state STARTED
2026-01-10 14:36:01.811929 | orchestrator | 2026-01-10 14:36:01 | INFO  | Task 17d93e9b-366a-450f-8ecc-93dc96c1cffb is in state STARTED
2026-01-10 14:36:01.811937 | orchestrator | 2026-01-10 14:36:01 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:36:04.838094 | orchestrator | 2026-01-10 14:36:04 | INFO  | Task e49c1d62-0376-427a-8e08-be74ad4fd0d4 is in state STARTED
2026-01-10 14:36:04.838346 | orchestrator | 2026-01-10 14:36:04 | INFO  | Task dfb308a8-9aed-4c35-936d-6fb4f84c96d6 is in state STARTED
2026-01-10 14:36:04.839501 | orchestrator | 2026-01-10 14:36:04 | INFO  | Task 955b2409-5d44-41fc-9935-7bfdb67baec4 is in state STARTED
2026-01-10 14:36:04.840378 | orchestrator | 2026-01-10 14:36:04 | INFO  | Task 17d93e9b-366a-450f-8ecc-93dc96c1cffb is in state STARTED
2026-01-10 14:36:04.840417 | orchestrator | 2026-01-10 14:36:04 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:36:07.876802 | orchestrator | 2026-01-10 14:36:07 | INFO  | Task e49c1d62-0376-427a-8e08-be74ad4fd0d4 is in state STARTED
2026-01-10 14:36:07.878057 | orchestrator | 2026-01-10 14:36:07 | INFO  | Task dfb308a8-9aed-4c35-936d-6fb4f84c96d6 is in state STARTED
2026-01-10 14:36:07.879923 | orchestrator | 2026-01-10 14:36:07 | INFO  | Task 955b2409-5d44-41fc-9935-7bfdb67baec4 is in state STARTED
2026-01-10 14:36:07.881779 | orchestrator | 2026-01-10 14:36:07 | INFO  | Task 17d93e9b-366a-450f-8ecc-93dc96c1cffb is in state STARTED
2026-01-10 14:36:07.881825 | orchestrator | 2026-01-10 14:36:07 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:36:10.915723 | orchestrator | 2026-01-10 14:36:10 | INFO  | Task e49c1d62-0376-427a-8e08-be74ad4fd0d4 is in state STARTED
2026-01-10 14:36:10.916781 | orchestrator | 2026-01-10 14:36:10 | INFO  | Task dfb308a8-9aed-4c35-936d-6fb4f84c96d6 is in state STARTED
2026-01-10 14:36:10.917933 | orchestrator | 2026-01-10 14:36:10 | INFO  | Task 955b2409-5d44-41fc-9935-7bfdb67baec4 is in state STARTED
2026-01-10 14:36:10.919384 | orchestrator | 2026-01-10 14:36:10 | INFO  | Task 17d93e9b-366a-450f-8ecc-93dc96c1cffb is in state STARTED
2026-01-10 14:36:10.919421 | orchestrator | 2026-01-10 14:36:10 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:36:13.944003 | orchestrator | 2026-01-10 14:36:13 | INFO  | Task e49c1d62-0376-427a-8e08-be74ad4fd0d4 is in state STARTED
2026-01-10 14:36:13.944539 | orchestrator | 2026-01-10 14:36:13 | INFO  | Task dfb308a8-9aed-4c35-936d-6fb4f84c96d6 is in state STARTED
2026-01-10 14:36:13.946266 | orchestrator | 2026-01-10 14:36:13 | INFO  | Task 955b2409-5d44-41fc-9935-7bfdb67baec4 is in state STARTED
2026-01-10 14:36:13.948745 | orchestrator | 2026-01-10 14:36:13 | INFO  | Task 17d93e9b-366a-450f-8ecc-93dc96c1cffb is in state STARTED
2026-01-10 14:36:13.948795 | orchestrator | 2026-01-10 14:36:13 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:36:16.981584 | orchestrator | 2026-01-10 14:36:16 | INFO  | Task e49c1d62-0376-427a-8e08-be74ad4fd0d4 is in state STARTED
2026-01-10 14:36:16.982232 | orchestrator | 2026-01-10 14:36:16 | INFO  | Task dfb308a8-9aed-4c35-936d-6fb4f84c96d6 is in state STARTED
2026-01-10 14:36:16.985068 | orchestrator | 2026-01-10 14:36:16 | INFO  | Task 955b2409-5d44-41fc-9935-7bfdb67baec4 is in state STARTED
2026-01-10 14:36:16.986128 | orchestrator | 2026-01-10 14:36:16 | INFO  | Task 17d93e9b-366a-450f-8ecc-93dc96c1cffb is in state STARTED
2026-01-10 14:36:16.987789 | orchestrator | 2026-01-10 14:36:16 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:36:20.039309 | orchestrator | 2026-01-10 14:36:20 | INFO  | Task e49c1d62-0376-427a-8e08-be74ad4fd0d4 is in state STARTED
2026-01-10 14:36:20.040188 | orchestrator | 2026-01-10 14:36:20 | INFO  | Task dfb308a8-9aed-4c35-936d-6fb4f84c96d6 is in state STARTED
2026-01-10 14:36:20.041045 | orchestrator | 2026-01-10 14:36:20 | INFO  | Task 955b2409-5d44-41fc-9935-7bfdb67baec4 is in state STARTED
2026-01-10 14:36:20.042504 | orchestrator | 2026-01-10 14:36:20 | INFO  | Task 17d93e9b-366a-450f-8ecc-93dc96c1cffb is in state STARTED
2026-01-10 14:36:20.043357 | orchestrator | 2026-01-10 14:36:20 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:36:23.098972 | orchestrator | 2026-01-10 14:36:23 | INFO  | Task e49c1d62-0376-427a-8e08-be74ad4fd0d4 is in state STARTED
2026-01-10 14:36:23.099727 | orchestrator | 2026-01-10 14:36:23 | INFO  | Task dfb308a8-9aed-4c35-936d-6fb4f84c96d6 is in state STARTED
2026-01-10 14:36:23.103057 | orchestrator | 2026-01-10 14:36:23 | INFO  | Task 955b2409-5d44-41fc-9935-7bfdb67baec4 is in state STARTED
2026-01-10 14:36:23.104321 | orchestrator | 2026-01-10 14:36:23 | INFO  | Task 17d93e9b-366a-450f-8ecc-93dc96c1cffb is in state STARTED
2026-01-10 14:36:23.104368 | orchestrator | 2026-01-10 14:36:23 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:36:26.141608 | orchestrator | 2026-01-10 14:36:26 | INFO  | Task e49c1d62-0376-427a-8e08-be74ad4fd0d4 is in state STARTED
2026-01-10 14:36:26.141962 | orchestrator | 2026-01-10 14:36:26 | INFO  | Task dfb308a8-9aed-4c35-936d-6fb4f84c96d6 is in state STARTED
2026-01-10 14:36:26.143179 | orchestrator | 2026-01-10 14:36:26 | INFO  | Task 955b2409-5d44-41fc-9935-7bfdb67baec4 is in state STARTED
2026-01-10 14:36:26.145614 | orchestrator | 2026-01-10 14:36:26 | INFO  | Task 17d93e9b-366a-450f-8ecc-93dc96c1cffb is in state STARTED
2026-01-10 14:36:26.145711 | orchestrator | 2026-01-10 14:36:26 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:36:29.193078 | orchestrator | 2026-01-10 14:36:29 | INFO  | Task e49c1d62-0376-427a-8e08-be74ad4fd0d4 is in state SUCCESS
2026-01-10 14:36:29.193456 | orchestrator |
2026-01-10 14:36:29.193541 | orchestrator |
2026-01-10 14:36:29.193552 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] *************************
2026-01-10 14:36:29.193559 | orchestrator |
2026-01-10 14:36:29.193565 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-01-10 14:36:29.193571 | orchestrator | Saturday 10 January 2026 14:35:33 +0000 (0:00:00.168) 0:00:00.168 ******
2026-01-10 14:36:29.193577 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-01-10 14:36:29.193582 | orchestrator |
2026-01-10 14:36:29.193587 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-01-10 14:36:29.193593 | orchestrator | Saturday 10 January 2026 14:35:34 +0000 (0:00:00.922) 0:00:01.091 ******
2026-01-10 14:36:29.193598 | orchestrator | changed: [testbed-manager]
2026-01-10 14:36:29.193604 | orchestrator |
2026-01-10 14:36:29.193609 | orchestrator | TASK [Change server address in the kubeconfig file] ****************************
2026-01-10 14:36:29.193614 | orchestrator | Saturday 10 January 2026 14:35:36 +0000 (0:00:02.068) 0:00:03.159 ******
2026-01-10 14:36:29.193619 | orchestrator | changed: [testbed-manager]
2026-01-10 14:36:29.193625 | orchestrator |
2026-01-10 14:36:29.193630 | orchestrator | PLAY RECAP *********************************************************************
2026-01-10 14:36:29.193635 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:36:29.193642 | orchestrator |
2026-01-10 14:36:29.193647 | orchestrator |
2026-01-10 14:36:29.193652 | orchestrator | TASKS RECAP ********************************************************************
2026-01-10 14:36:29.193658 | orchestrator | Saturday 10 January 2026 14:35:36 +0000 (0:00:00.477) 0:00:03.636 ******
2026-01-10 14:36:29.193663 | orchestrator | ===============================================================================
2026-01-10 14:36:29.193668 | orchestrator | Write kubeconfig file --------------------------------------------------- 2.07s
2026-01-10 14:36:29.193673 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.92s
2026-01-10 14:36:29.193679 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.48s
2026-01-10 14:36:29.193688 | orchestrator |
2026-01-10 14:36:29.193696 | orchestrator |
2026-01-10 14:36:29.193706 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2026-01-10 14:36:29.193733 | orchestrator |
2026-01-10 14:36:29.193743 | orchestrator | TASK [Get home directory of operator user] *************************************
2026-01-10 14:36:29.193753 | orchestrator | Saturday 10 January 2026 14:35:32 +0000 (0:00:00.234) 0:00:00.234 ******
2026-01-10 14:36:29.193761 | orchestrator | ok: [testbed-manager]
2026-01-10 14:36:29.193771 | orchestrator |
2026-01-10 14:36:29.193790 | orchestrator | TASK [Create .kube directory] **************************************************
2026-01-10 14:36:29.193833 | orchestrator | Saturday 10 January 2026 14:35:33 +0000 (0:00:00.806) 0:00:01.040 ******
2026-01-10 14:36:29.193844 | orchestrator | ok: [testbed-manager]
2026-01-10 14:36:29.193854 | orchestrator |
2026-01-10 14:36:29.193864 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-01-10 14:36:29.193873 | orchestrator | Saturday 10 January 2026 14:35:34 +0000 (0:00:00.555) 0:00:01.596 ******
2026-01-10 14:36:29.193882 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-01-10 14:36:29.193891 | orchestrator |
2026-01-10 14:36:29.193900 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-01-10 14:36:29.193909 | orchestrator | Saturday 10 January 2026 14:35:35 +0000 (0:00:00.715) 0:00:02.312 ******
2026-01-10 14:36:29.193919 | orchestrator | changed: [testbed-manager]
2026-01-10 14:36:29.193928 | orchestrator |
2026-01-10 14:36:29.193933 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2026-01-10 14:36:29.194149 | orchestrator | Saturday 10 January 2026 14:35:37 +0000 (0:00:02.439) 0:00:04.752 ******
2026-01-10 14:36:29.194157 | orchestrator | changed: [testbed-manager]
2026-01-10 14:36:29.194163 | orchestrator |
2026-01-10 14:36:29.194171 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2026-01-10 14:36:29.194180 | orchestrator | Saturday 10 January 2026 14:35:38 +0000 (0:00:00.549) 0:00:05.301 ******
2026-01-10 14:36:29.194194 | orchestrator | changed: [testbed-manager -> localhost]
2026-01-10 14:36:29.194203 | orchestrator |
14:36:29.194213 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2026-01-10 14:36:29.194222 | orchestrator | Saturday 10 January 2026 14:35:39 +0000 (0:00:01.632) 0:00:06.933 ****** 2026-01-10 14:36:29.194231 | orchestrator | changed: [testbed-manager -> localhost] 2026-01-10 14:36:29.194240 | orchestrator | 2026-01-10 14:36:29.194250 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2026-01-10 14:36:29.194259 | orchestrator | Saturday 10 January 2026 14:35:40 +0000 (0:00:00.826) 0:00:07.760 ****** 2026-01-10 14:36:29.194267 | orchestrator | ok: [testbed-manager] 2026-01-10 14:36:29.194276 | orchestrator | 2026-01-10 14:36:29.194284 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2026-01-10 14:36:29.194293 | orchestrator | Saturday 10 January 2026 14:35:40 +0000 (0:00:00.421) 0:00:08.181 ****** 2026-01-10 14:36:29.194302 | orchestrator | ok: [testbed-manager] 2026-01-10 14:36:29.194311 | orchestrator | 2026-01-10 14:36:29.194320 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-10 14:36:29.194330 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-10 14:36:29.194340 | orchestrator | 2026-01-10 14:36:29.194348 | orchestrator | 2026-01-10 14:36:29.194358 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-10 14:36:29.194367 | orchestrator | Saturday 10 January 2026 14:35:41 +0000 (0:00:00.340) 0:00:08.522 ****** 2026-01-10 14:36:29.194376 | orchestrator | =============================================================================== 2026-01-10 14:36:29.194381 | orchestrator | Write kubeconfig file --------------------------------------------------- 2.44s 2026-01-10 14:36:29.194387 | orchestrator | Make kubeconfig available for use inside the manager 
service ------------ 1.63s 2026-01-10 14:36:29.194392 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.83s 2026-01-10 14:36:29.194409 | orchestrator | Get home directory of operator user ------------------------------------- 0.81s 2026-01-10 14:36:29.194424 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.72s 2026-01-10 14:36:29.194429 | orchestrator | Create .kube directory -------------------------------------------------- 0.56s 2026-01-10 14:36:29.194434 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.55s 2026-01-10 14:36:29.194440 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.42s 2026-01-10 14:36:29.194445 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.34s 2026-01-10 14:36:29.194450 | orchestrator | 2026-01-10 14:36:29.194455 | orchestrator | 2026-01-10 14:36:29.194460 | orchestrator | PLAY [Set kolla_action_rabbitmq] *********************************************** 2026-01-10 14:36:29.194466 | orchestrator | 2026-01-10 14:36:29.194471 | orchestrator | TASK [Inform the user about the following task] ******************************** 2026-01-10 14:36:29.194476 | orchestrator | Saturday 10 January 2026 14:34:01 +0000 (0:00:00.289) 0:00:00.289 ****** 2026-01-10 14:36:29.194481 | orchestrator | ok: [localhost] => { 2026-01-10 14:36:29.194488 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine." 
2026-01-10 14:36:29.194497 | orchestrator | } 2026-01-10 14:36:29.194506 | orchestrator | 2026-01-10 14:36:29.194515 | orchestrator | TASK [Check RabbitMQ service] ************************************************** 2026-01-10 14:36:29.194523 | orchestrator | Saturday 10 January 2026 14:34:02 +0000 (0:00:00.195) 0:00:00.485 ****** 2026-01-10 14:36:29.194534 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"} 2026-01-10 14:36:29.194544 | orchestrator | ...ignoring 2026-01-10 14:36:29.194554 | orchestrator | 2026-01-10 14:36:29.194563 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ****** 2026-01-10 14:36:29.194571 | orchestrator | Saturday 10 January 2026 14:34:05 +0000 (0:00:02.944) 0:00:03.430 ****** 2026-01-10 14:36:29.194579 | orchestrator | skipping: [localhost] 2026-01-10 14:36:29.194588 | orchestrator | 2026-01-10 14:36:29.194597 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] ***************************** 2026-01-10 14:36:29.194605 | orchestrator | Saturday 10 January 2026 14:34:05 +0000 (0:00:00.068) 0:00:03.498 ****** 2026-01-10 14:36:29.194615 | orchestrator | ok: [localhost] 2026-01-10 14:36:29.194624 | orchestrator | 2026-01-10 14:36:29.194632 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-10 14:36:29.194642 | orchestrator | 2026-01-10 14:36:29.194651 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-10 14:36:29.194668 | orchestrator | Saturday 10 January 2026 14:34:05 +0000 (0:00:00.518) 0:00:04.017 ****** 2026-01-10 14:36:29.194675 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:36:29.194684 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:36:29.194692 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:36:29.194700 | orchestrator | 2026-01-10 
14:36:29.194709 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-10 14:36:29.194718 | orchestrator | Saturday 10 January 2026 14:34:05 +0000 (0:00:00.310) 0:00:04.327 ****** 2026-01-10 14:36:29.194727 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2026-01-10 14:36:29.194735 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2026-01-10 14:36:29.194741 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2026-01-10 14:36:29.194746 | orchestrator | 2026-01-10 14:36:29.194751 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2026-01-10 14:36:29.194756 | orchestrator | 2026-01-10 14:36:29.194762 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-01-10 14:36:29.194769 | orchestrator | Saturday 10 January 2026 14:34:06 +0000 (0:00:00.675) 0:00:05.003 ****** 2026-01-10 14:36:29.194775 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:36:29.194781 | orchestrator | 2026-01-10 14:36:29.194786 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-01-10 14:36:29.194884 | orchestrator | Saturday 10 January 2026 14:34:08 +0000 (0:00:01.812) 0:00:06.816 ****** 2026-01-10 14:36:29.194900 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:36:29.194918 | orchestrator | 2026-01-10 14:36:29.194929 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2026-01-10 14:36:29.194939 | orchestrator | Saturday 10 January 2026 14:34:10 +0000 (0:00:02.113) 0:00:08.929 ****** 2026-01-10 14:36:29.194948 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:36:29.194959 | orchestrator | 2026-01-10 14:36:29.194969 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 
2026-01-10 14:36:29.194978 | orchestrator | Saturday 10 January 2026 14:34:10 +0000 (0:00:00.397) 0:00:09.327 ****** 2026-01-10 14:36:29.194988 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:36:29.194997 | orchestrator | 2026-01-10 14:36:29.195004 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2026-01-10 14:36:29.195010 | orchestrator | Saturday 10 January 2026 14:34:11 +0000 (0:00:00.478) 0:00:09.805 ****** 2026-01-10 14:36:29.195016 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:36:29.195022 | orchestrator | 2026-01-10 14:36:29.195027 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2026-01-10 14:36:29.195033 | orchestrator | Saturday 10 January 2026 14:34:11 +0000 (0:00:00.453) 0:00:10.259 ****** 2026-01-10 14:36:29.195039 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:36:29.195045 | orchestrator | 2026-01-10 14:36:29.195051 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-01-10 14:36:29.195056 | orchestrator | Saturday 10 January 2026 14:34:13 +0000 (0:00:01.213) 0:00:11.473 ****** 2026-01-10 14:36:29.195062 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:36:29.195069 | orchestrator | 2026-01-10 14:36:29.195077 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-01-10 14:36:29.195102 | orchestrator | Saturday 10 January 2026 14:34:13 +0000 (0:00:00.719) 0:00:12.192 ****** 2026-01-10 14:36:29.195113 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:36:29.195122 | orchestrator | 2026-01-10 14:36:29.195130 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2026-01-10 14:36:29.195139 | orchestrator | Saturday 10 January 2026 14:34:14 +0000 (0:00:00.841) 0:00:13.034 ****** 2026-01-10 
14:36:29.195149 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:36:29.195158 | orchestrator | 2026-01-10 14:36:29.195166 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2026-01-10 14:36:29.195174 | orchestrator | Saturday 10 January 2026 14:34:15 +0000 (0:00:00.476) 0:00:13.510 ****** 2026-01-10 14:36:29.195183 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:36:29.195192 | orchestrator | 2026-01-10 14:36:29.195200 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2026-01-10 14:36:29.195209 | orchestrator | Saturday 10 January 2026 14:34:15 +0000 (0:00:00.817) 0:00:14.328 ****** 2026-01-10 14:36:29.195221 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-10 14:36:29.195249 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-10 14:36:29.195260 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 
'host_group': 'rabbitmq'}}}}) 2026-01-10 14:36:29.195267 | orchestrator | 2026-01-10 14:36:29.195272 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2026-01-10 14:36:29.195278 | orchestrator | Saturday 10 January 2026 14:34:17 +0000 (0:00:01.303) 0:00:15.631 ****** 2026-01-10 14:36:29.195289 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-10 14:36:29.195299 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 
'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-10 14:36:29.195309 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-10 14:36:29.195315 | orchestrator | 2026-01-10 14:36:29.195320 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2026-01-10 14:36:29.195326 | orchestrator | Saturday 10 January 2026 14:34:20 +0000 (0:00:02.758) 0:00:18.390 ****** 2026-01-10 14:36:29.195331 | orchestrator | 
changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-01-10 14:36:29.195336 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-01-10 14:36:29.195342 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-01-10 14:36:29.195347 | orchestrator | 2026-01-10 14:36:29.195352 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2026-01-10 14:36:29.195357 | orchestrator | Saturday 10 January 2026 14:34:22 +0000 (0:00:02.550) 0:00:20.940 ****** 2026-01-10 14:36:29.195362 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-01-10 14:36:29.195368 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-01-10 14:36:29.195373 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-01-10 14:36:29.195378 | orchestrator | 2026-01-10 14:36:29.195386 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2026-01-10 14:36:29.195392 | orchestrator | Saturday 10 January 2026 14:34:25 +0000 (0:00:02.539) 0:00:23.480 ****** 2026-01-10 14:36:29.195397 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-01-10 14:36:29.195402 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-01-10 14:36:29.195407 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-01-10 14:36:29.195413 | orchestrator | 2026-01-10 14:36:29.195418 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2026-01-10 14:36:29.195423 | orchestrator | Saturday 10 January 2026 14:34:26 +0000 (0:00:01.557) 0:00:25.037 ****** 
2026-01-10 14:36:29.195429 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-01-10 14:36:29.195434 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-01-10 14:36:29.195443 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-01-10 14:36:29.195448 | orchestrator | 2026-01-10 14:36:29.195453 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2026-01-10 14:36:29.195459 | orchestrator | Saturday 10 January 2026 14:34:28 +0000 (0:00:01.825) 0:00:26.862 ****** 2026-01-10 14:36:29.195464 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-01-10 14:36:29.195469 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-01-10 14:36:29.195475 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-01-10 14:36:29.195480 | orchestrator | 2026-01-10 14:36:29.195485 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2026-01-10 14:36:29.195490 | orchestrator | Saturday 10 January 2026 14:34:30 +0000 (0:00:01.674) 0:00:28.536 ****** 2026-01-10 14:36:29.195495 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-01-10 14:36:29.195501 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-01-10 14:36:29.195506 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-01-10 14:36:29.195511 | orchestrator | 2026-01-10 14:36:29.195518 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-01-10 14:36:29.195523 | orchestrator | Saturday 10 
January 2026 14:34:32 +0000 (0:00:02.012) 0:00:30.549 ****** 2026-01-10 14:36:29.195529 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:36:29.195534 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:36:29.195539 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:36:29.195544 | orchestrator | 2026-01-10 14:36:29.195549 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2026-01-10 14:36:29.195554 | orchestrator | Saturday 10 January 2026 14:34:33 +0000 (0:00:00.853) 0:00:31.402 ****** 2026-01-10 14:36:29.195560 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-10 14:36:29.195571 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-10 14:36:29.195581 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-10 14:36:29.195587 | orchestrator | 2026-01-10 14:36:29.195592 | orchestrator | TASK [rabbitmq : Creating 
rabbitmq volume] ************************************* 2026-01-10 14:36:29.195597 | orchestrator | Saturday 10 January 2026 14:34:35 +0000 (0:00:02.598) 0:00:34.001 ****** 2026-01-10 14:36:29.195602 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:36:29.195607 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:36:29.195613 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:36:29.195618 | orchestrator | 2026-01-10 14:36:29.195623 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2026-01-10 14:36:29.195628 | orchestrator | Saturday 10 January 2026 14:34:36 +0000 (0:00:00.872) 0:00:34.873 ****** 2026-01-10 14:36:29.195636 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:36:29.195642 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:36:29.195647 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:36:29.195652 | orchestrator | 2026-01-10 14:36:29.195658 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2026-01-10 14:36:29.195663 | orchestrator | Saturday 10 January 2026 14:34:43 +0000 (0:00:06.544) 0:00:41.418 ****** 2026-01-10 14:36:29.195668 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:36:29.195673 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:36:29.195679 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:36:29.195684 | orchestrator | 2026-01-10 14:36:29.195689 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-01-10 14:36:29.195694 | orchestrator | 2026-01-10 14:36:29.195700 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-01-10 14:36:29.195705 | orchestrator | Saturday 10 January 2026 14:34:43 +0000 (0:00:00.386) 0:00:41.805 ****** 2026-01-10 14:36:29.195710 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:36:29.195715 | orchestrator | 2026-01-10 14:36:29.195720 | orchestrator | TASK [rabbitmq : 
Put RabbitMQ node into maintenance mode] ********************** 2026-01-10 14:36:29.195725 | orchestrator | Saturday 10 January 2026 14:34:44 +0000 (0:00:00.825) 0:00:42.631 ****** 2026-01-10 14:36:29.195730 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:36:29.195736 | orchestrator | 2026-01-10 14:36:29.195741 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-01-10 14:36:29.195746 | orchestrator | Saturday 10 January 2026 14:34:44 +0000 (0:00:00.367) 0:00:42.998 ****** 2026-01-10 14:36:29.195751 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:36:29.195757 | orchestrator | 2026-01-10 14:36:29.195762 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-01-10 14:36:29.195770 | orchestrator | Saturday 10 January 2026 14:34:46 +0000 (0:00:01.726) 0:00:44.725 ****** 2026-01-10 14:36:29.195779 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:36:29.195832 | orchestrator | 2026-01-10 14:36:29.195842 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-01-10 14:36:29.195847 | orchestrator | 2026-01-10 14:36:29.195852 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-01-10 14:36:29.195858 | orchestrator | Saturday 10 January 2026 14:35:45 +0000 (0:00:59.387) 0:01:44.112 ****** 2026-01-10 14:36:29.195863 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:36:29.195868 | orchestrator | 2026-01-10 14:36:29.195873 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-01-10 14:36:29.195878 | orchestrator | Saturday 10 January 2026 14:35:46 +0000 (0:00:00.561) 0:01:44.674 ****** 2026-01-10 14:36:29.195883 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:36:29.195888 | orchestrator | 2026-01-10 14:36:29.195894 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] 
*********************************** 2026-01-10 14:36:29.195899 | orchestrator | Saturday 10 January 2026 14:35:46 +0000 (0:00:00.201) 0:01:44.875 ****** 2026-01-10 14:36:29.195904 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:36:29.195909 | orchestrator | 2026-01-10 14:36:29.195914 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-01-10 14:36:29.195919 | orchestrator | Saturday 10 January 2026 14:35:48 +0000 (0:00:02.157) 0:01:47.033 ****** 2026-01-10 14:36:29.195924 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:36:29.195929 | orchestrator | 2026-01-10 14:36:29.195938 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-01-10 14:36:29.195946 | orchestrator | 2026-01-10 14:36:29.195956 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-01-10 14:36:29.195970 | orchestrator | Saturday 10 January 2026 14:36:04 +0000 (0:00:16.292) 0:02:03.326 ****** 2026-01-10 14:36:29.195980 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:36:29.196098 | orchestrator | 2026-01-10 14:36:29.196105 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-01-10 14:36:29.196110 | orchestrator | Saturday 10 January 2026 14:36:05 +0000 (0:00:00.665) 0:02:03.992 ****** 2026-01-10 14:36:29.196116 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:36:29.196121 | orchestrator | 2026-01-10 14:36:29.196127 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-01-10 14:36:29.196132 | orchestrator | Saturday 10 January 2026 14:36:05 +0000 (0:00:00.246) 0:02:04.238 ****** 2026-01-10 14:36:29.196140 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:36:29.196152 | orchestrator | 2026-01-10 14:36:29.196164 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-01-10 
14:36:29.196173 | orchestrator | Saturday 10 January 2026 14:36:12 +0000 (0:00:06.720) 0:02:10.959 ****** 2026-01-10 14:36:29.196182 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:36:29.196190 | orchestrator | 2026-01-10 14:36:29.196199 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2026-01-10 14:36:29.196206 | orchestrator | 2026-01-10 14:36:29.196215 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2026-01-10 14:36:29.196223 | orchestrator | Saturday 10 January 2026 14:36:24 +0000 (0:00:11.663) 0:02:22.622 ****** 2026-01-10 14:36:29.196233 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:36:29.196242 | orchestrator | 2026-01-10 14:36:29.196250 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2026-01-10 14:36:29.196259 | orchestrator | Saturday 10 January 2026 14:36:24 +0000 (0:00:00.537) 0:02:23.159 ****** 2026-01-10 14:36:29.196266 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-01-10 14:36:29.196275 | orchestrator | enable_outward_rabbitmq_True 2026-01-10 14:36:29.196284 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-01-10 14:36:29.196292 | orchestrator | outward_rabbitmq_restart 2026-01-10 14:36:29.196302 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:36:29.196311 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:36:29.196320 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:36:29.196328 | orchestrator | 2026-01-10 14:36:29.196350 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2026-01-10 14:36:29.196359 | orchestrator | skipping: no hosts matched 2026-01-10 14:36:29.196367 | orchestrator | 2026-01-10 14:36:29.196376 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2026-01-10 
14:36:29.196385 | orchestrator | skipping: no hosts matched 2026-01-10 14:36:29.196393 | orchestrator | 2026-01-10 14:36:29.196408 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2026-01-10 14:36:29.196418 | orchestrator | skipping: no hosts matched 2026-01-10 14:36:29.196427 | orchestrator | 2026-01-10 14:36:29.196436 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-10 14:36:29.196444 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-01-10 14:36:29.196453 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-01-10 14:36:29.196461 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-10 14:36:29.196470 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-10 14:36:29.196479 | orchestrator | 2026-01-10 14:36:29.196487 | orchestrator | 2026-01-10 14:36:29.196496 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-10 14:36:29.196505 | orchestrator | Saturday 10 January 2026 14:36:27 +0000 (0:00:02.928) 0:02:26.088 ****** 2026-01-10 14:36:29.196513 | orchestrator | =============================================================================== 2026-01-10 14:36:29.196521 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 87.34s 2026-01-10 14:36:29.196530 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 10.61s 2026-01-10 14:36:29.196539 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 6.54s 2026-01-10 14:36:29.196547 | orchestrator | Check RabbitMQ service -------------------------------------------------- 2.94s 2026-01-10 14:36:29.196556 | orchestrator | 
rabbitmq : Enable all stable feature flags ------------------------------ 2.93s 2026-01-10 14:36:29.196564 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 2.76s 2026-01-10 14:36:29.196572 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 2.60s 2026-01-10 14:36:29.196580 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 2.55s 2026-01-10 14:36:29.196588 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 2.54s 2026-01-10 14:36:29.196597 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 2.11s 2026-01-10 14:36:29.196605 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 2.05s 2026-01-10 14:36:29.196613 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 2.01s 2026-01-10 14:36:29.196621 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 1.83s 2026-01-10 14:36:29.196629 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.81s 2026-01-10 14:36:29.196638 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.67s 2026-01-10 14:36:29.196659 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.56s 2026-01-10 14:36:29.196669 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.30s 2026-01-10 14:36:29.196677 | orchestrator | rabbitmq : Catch when RabbitMQ is being downgraded ---------------------- 1.21s 2026-01-10 14:36:29.196686 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 0.87s 2026-01-10 14:36:29.196694 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 0.85s 2026-01-10 14:36:29.196703 | orchestrator | 2026-01-10 
14:36:29 | INFO  | Task dfb308a8-9aed-4c35-936d-6fb4f84c96d6 is in state STARTED 2026-01-10 14:36:29.198711 | orchestrator | 2026-01-10 14:36:29 | INFO  | Task 955b2409-5d44-41fc-9935-7bfdb67baec4 is in state STARTED 2026-01-10 14:36:29.200693 | orchestrator | 2026-01-10 14:36:29 | INFO  | Task 17d93e9b-366a-450f-8ecc-93dc96c1cffb is in state STARTED 2026-01-10 14:36:29.201278 | orchestrator | 2026-01-10 14:36:29 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:37:11.810437 | orchestrator | 2026-01-10 14:37:11 | INFO  | Task dfb308a8-9aed-4c35-936d-6fb4f84c96d6 is in state STARTED 2026-01-10 14:37:11.810634 | orchestrator | 2026-01-10 14:37:11 | INFO  | Task 955b2409-5d44-41fc-9935-7bfdb67baec4 is in state STARTED 2026-01-10 14:37:11.812233 | orchestrator | 2026-01-10 14:37:11 | INFO  | Task 17d93e9b-366a-450f-8ecc-93dc96c1cffb is in state STARTED 2026-01-10 14:37:11.812291 | orchestrator | 2026-01-10 14:37:11 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:37:14.840846 | orchestrator | 2026-01-10 14:37:14.840950 | orchestrator | 2026-01-10 14:37:14.840961 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-10 14:37:14.840969 |
orchestrator | 2026-01-10 14:37:14.840976 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-10 14:37:14.840983 | orchestrator | Saturday 10 January 2026 14:34:46 +0000 (0:00:00.374) 0:00:00.374 ****** 2026-01-10 14:37:14.840990 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:37:14.840998 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:37:14.841004 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:37:14.841010 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:37:14.841017 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:37:14.841023 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:37:14.841029 | orchestrator | 2026-01-10 14:37:14.841036 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-10 14:37:14.841043 | orchestrator | Saturday 10 January 2026 14:34:47 +0000 (0:00:00.941) 0:00:01.316 ****** 2026-01-10 14:37:14.841050 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2026-01-10 14:37:14.841056 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2026-01-10 14:37:14.841063 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2026-01-10 14:37:14.841070 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2026-01-10 14:37:14.841076 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2026-01-10 14:37:14.841082 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2026-01-10 14:37:14.841088 | orchestrator | 2026-01-10 14:37:14.841094 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2026-01-10 14:37:14.841100 | orchestrator | 2026-01-10 14:37:14.841106 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2026-01-10 14:37:14.841112 | orchestrator | Saturday 10 January 2026 14:34:48 +0000 (0:00:00.886) 0:00:02.203 ****** 2026-01-10 14:37:14.841119 | orchestrator | included: 
/ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:37:14.841128 | orchestrator | 2026-01-10 14:37:14.841134 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2026-01-10 14:37:14.841140 | orchestrator | Saturday 10 January 2026 14:34:49 +0000 (0:00:01.183) 0:00:03.386 ****** 2026-01-10 14:37:14.841149 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:37:14.841159 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:37:14.841176 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:37:14.841204 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 
'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:37:14.841211 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:37:14.841235 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:37:14.841243 | orchestrator | 2026-01-10 14:37:14.841250 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2026-01-10 14:37:14.841257 | orchestrator | Saturday 10 January 2026 14:34:51 +0000 (0:00:01.514) 0:00:04.901 ****** 2026-01-10 14:37:14.841264 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:37:14.841271 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:37:14.841277 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:37:14.841285 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:37:14.841292 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-01-10 14:37:14.841309 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:37:14.841318 | orchestrator | 2026-01-10 14:37:14.841326 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2026-01-10 14:37:14.841334 | orchestrator | Saturday 10 January 2026 14:34:53 +0000 (0:00:02.700) 0:00:07.601 ****** 2026-01-10 14:37:14.841341 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:37:14.841347 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:37:14.841363 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:37:14.841370 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:37:14.841376 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:37:14.841383 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:37:14.841389 | orchestrator | 2026-01-10 14:37:14.841395 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2026-01-10 14:37:14.841401 | orchestrator | Saturday 10 January 2026 14:34:55 +0000 (0:00:01.675) 0:00:09.277 ****** 2026-01-10 14:37:14.841408 | orchestrator | changed: [testbed-node-4] => (item={'key': 
'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:37:14.841422 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:37:14.841434 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:37:14.841443 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:37:14.841450 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:37:14.841462 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:37:14.841470 | orchestrator | 2026-01-10 14:37:14.841478 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2026-01-10 14:37:14.841485 | orchestrator | Saturday 10 January 2026 14:34:57 +0000 (0:00:02.362) 0:00:11.639 ****** 2026-01-10 14:37:14.841493 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:37:14.841501 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2026-01-10 14:37:14.841509 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:37:14.841521 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:37:14.841528 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:37:14.841540 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:37:14.841557 | orchestrator | 2026-01-10 14:37:14.841565 
| orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2026-01-10 14:37:14.841571 | orchestrator | Saturday 10 January 2026 14:34:59 +0000 (0:00:02.207) 0:00:13.847 ****** 2026-01-10 14:37:14.841579 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:37:14.841587 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:37:14.841594 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:37:14.841601 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:37:14.841608 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:37:14.841616 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:37:14.841623 | orchestrator | 2026-01-10 14:37:14.841630 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2026-01-10 14:37:14.841638 | orchestrator | Saturday 10 January 2026 14:35:03 +0000 (0:00:03.089) 0:00:16.936 ****** 2026-01-10 14:37:14.841645 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2026-01-10 14:37:14.841652 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2026-01-10 14:37:14.841660 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2026-01-10 14:37:14.841670 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2026-01-10 14:37:14.841678 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2026-01-10 14:37:14.841684 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2026-01-10 14:37:14.841692 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-01-10 14:37:14.841699 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-01-10 14:37:14.841706 | 
orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-01-10 14:37:14.841713 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-01-10 14:37:14.841720 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-01-10 14:37:14.841726 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-01-10 14:37:14.841734 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-01-10 14:37:14.841747 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-01-10 14:37:14.841781 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-01-10 14:37:14.841788 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-01-10 14:37:14.841794 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-01-10 14:37:14.841801 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-01-10 14:37:14.841807 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-01-10 14:37:14.841815 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-01-10 14:37:14.841822 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-01-10 
14:37:14.841829 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-01-10 14:37:14.841836 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-01-10 14:37:14.841842 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-01-10 14:37:14.841849 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-01-10 14:37:14.841856 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-01-10 14:37:14.841863 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-01-10 14:37:14.841869 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-01-10 14:37:14.841880 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-01-10 14:37:14.841886 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-01-10 14:37:14.841892 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-01-10 14:37:14.841899 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-01-10 14:37:14.841905 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-01-10 14:37:14.841912 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-01-10 14:37:14.841918 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-01-10 14:37:14.841924 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-01-10 
14:37:14.841931 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-01-10 14:37:14.841937 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-01-10 14:37:14.841944 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-01-10 14:37:14.841950 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-01-10 14:37:14.841963 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2026-01-10 14:37:14.841970 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-01-10 14:37:14.841983 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-01-10 14:37:14.841990 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2026-01-10 14:37:14.841997 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2026-01-10 14:37:14.842005 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-01-10 14:37:14.842078 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2026-01-10 14:37:14.842088 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2026-01-10 14:37:14.842095 | orchestrator | ok: [testbed-node-4] => (item={'name': 
'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-01-10 14:37:14.842101 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2026-01-10 14:37:14.842108 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-01-10 14:37:14.842116 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-01-10 14:37:14.842123 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-01-10 14:37:14.842130 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-01-10 14:37:14.842137 | orchestrator | 2026-01-10 14:37:14.842143 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-01-10 14:37:14.842149 | orchestrator | Saturday 10 January 2026 14:35:25 +0000 (0:00:22.198) 0:00:39.135 ****** 2026-01-10 14:37:14.842156 | orchestrator | 2026-01-10 14:37:14.842163 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-01-10 14:37:14.842169 | orchestrator | Saturday 10 January 2026 14:35:25 +0000 (0:00:00.135) 0:00:39.271 ****** 2026-01-10 14:37:14.842176 | orchestrator | 2026-01-10 14:37:14.842182 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-01-10 14:37:14.842189 | orchestrator | Saturday 10 January 2026 14:35:25 +0000 (0:00:00.137) 0:00:39.408 ****** 2026-01-10 14:37:14.842196 | orchestrator | 2026-01-10 14:37:14.842203 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-01-10 14:37:14.842210 | orchestrator | Saturday 10 
January 2026 14:35:25 +0000 (0:00:00.110) 0:00:39.519 ****** 2026-01-10 14:37:14.842216 | orchestrator | 2026-01-10 14:37:14.842222 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-01-10 14:37:14.842229 | orchestrator | Saturday 10 January 2026 14:35:25 +0000 (0:00:00.059) 0:00:39.578 ****** 2026-01-10 14:37:14.842236 | orchestrator | 2026-01-10 14:37:14.842243 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-01-10 14:37:14.842250 | orchestrator | Saturday 10 January 2026 14:35:25 +0000 (0:00:00.059) 0:00:39.637 ****** 2026-01-10 14:37:14.842256 | orchestrator | 2026-01-10 14:37:14.842263 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2026-01-10 14:37:14.842275 | orchestrator | Saturday 10 January 2026 14:35:25 +0000 (0:00:00.059) 0:00:39.697 ****** 2026-01-10 14:37:14.842281 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:37:14.842288 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:37:14.842295 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:37:14.842301 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:37:14.842308 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:37:14.842315 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:37:14.842327 | orchestrator | 2026-01-10 14:37:14.842334 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2026-01-10 14:37:14.842341 | orchestrator | Saturday 10 January 2026 14:35:27 +0000 (0:00:02.089) 0:00:41.786 ****** 2026-01-10 14:37:14.842347 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:37:14.842353 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:37:14.842360 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:37:14.842366 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:37:14.842372 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:37:14.842378 | orchestrator | changed: 
[testbed-node-1] 2026-01-10 14:37:14.842383 | orchestrator | 2026-01-10 14:37:14.842390 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2026-01-10 14:37:14.842397 | orchestrator | 2026-01-10 14:37:14.842403 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-01-10 14:37:14.842410 | orchestrator | Saturday 10 January 2026 14:35:57 +0000 (0:00:29.618) 0:01:11.404 ****** 2026-01-10 14:37:14.842417 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:37:14.842423 | orchestrator | 2026-01-10 14:37:14.842429 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-01-10 14:37:14.842435 | orchestrator | Saturday 10 January 2026 14:35:58 +0000 (0:00:00.667) 0:01:12.072 ****** 2026-01-10 14:37:14.842442 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:37:14.842448 | orchestrator | 2026-01-10 14:37:14.842460 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2026-01-10 14:37:14.842467 | orchestrator | Saturday 10 January 2026 14:35:58 +0000 (0:00:00.551) 0:01:12.623 ****** 2026-01-10 14:37:14.842474 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:37:14.842481 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:37:14.842488 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:37:14.842494 | orchestrator | 2026-01-10 14:37:14.842501 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2026-01-10 14:37:14.842507 | orchestrator | Saturday 10 January 2026 14:35:59 +0000 (0:00:01.023) 0:01:13.647 ****** 2026-01-10 14:37:14.842513 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:37:14.842520 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:37:14.842527 | orchestrator | ok: 
[testbed-node-2] 2026-01-10 14:37:14.842534 | orchestrator | 2026-01-10 14:37:14.842541 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2026-01-10 14:37:14.842547 | orchestrator | Saturday 10 January 2026 14:36:00 +0000 (0:00:00.345) 0:01:13.992 ****** 2026-01-10 14:37:14.842554 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:37:14.842560 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:37:14.842567 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:37:14.842574 | orchestrator | 2026-01-10 14:37:14.842580 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2026-01-10 14:37:14.842586 | orchestrator | Saturday 10 January 2026 14:36:00 +0000 (0:00:00.395) 0:01:14.388 ****** 2026-01-10 14:37:14.842593 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:37:14.842600 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:37:14.842607 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:37:14.842613 | orchestrator | 2026-01-10 14:37:14.842620 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2026-01-10 14:37:14.842627 | orchestrator | Saturday 10 January 2026 14:36:00 +0000 (0:00:00.371) 0:01:14.760 ****** 2026-01-10 14:37:14.842633 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:37:14.842639 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:37:14.842645 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:37:14.842652 | orchestrator | 2026-01-10 14:37:14.842658 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2026-01-10 14:37:14.842665 | orchestrator | Saturday 10 January 2026 14:36:01 +0000 (0:00:00.511) 0:01:15.271 ****** 2026-01-10 14:37:14.842671 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:37:14.842684 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:37:14.842692 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:37:14.842699 | 
orchestrator | 2026-01-10 14:37:14.842705 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2026-01-10 14:37:14.842712 | orchestrator | Saturday 10 January 2026 14:36:01 +0000 (0:00:00.350) 0:01:15.621 ****** 2026-01-10 14:37:14.842719 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:37:14.842726 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:37:14.842732 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:37:14.842739 | orchestrator | 2026-01-10 14:37:14.842746 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2026-01-10 14:37:14.842767 | orchestrator | Saturday 10 January 2026 14:36:02 +0000 (0:00:00.430) 0:01:16.052 ****** 2026-01-10 14:37:14.842774 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:37:14.842780 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:37:14.842786 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:37:14.842792 | orchestrator | 2026-01-10 14:37:14.842798 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2026-01-10 14:37:14.842805 | orchestrator | Saturday 10 January 2026 14:36:02 +0000 (0:00:00.337) 0:01:16.390 ****** 2026-01-10 14:37:14.842811 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:37:14.842817 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:37:14.842824 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:37:14.842831 | orchestrator | 2026-01-10 14:37:14.842838 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2026-01-10 14:37:14.842845 | orchestrator | Saturday 10 January 2026 14:36:03 +0000 (0:00:00.677) 0:01:17.068 ****** 2026-01-10 14:37:14.842852 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:37:14.842859 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:37:14.842866 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:37:14.842873 | 
orchestrator | 2026-01-10 14:37:14.842880 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2026-01-10 14:37:14.842896 | orchestrator | Saturday 10 January 2026 14:36:03 +0000 (0:00:00.308) 0:01:17.376 ****** 2026-01-10 14:37:14.842904 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:37:14.842911 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:37:14.842918 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:37:14.842925 | orchestrator | 2026-01-10 14:37:14.842932 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2026-01-10 14:37:14.842939 | orchestrator | Saturday 10 January 2026 14:36:03 +0000 (0:00:00.315) 0:01:17.692 ****** 2026-01-10 14:37:14.842946 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:37:14.842953 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:37:14.842960 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:37:14.842967 | orchestrator | 2026-01-10 14:37:14.842974 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2026-01-10 14:37:14.842980 | orchestrator | Saturday 10 January 2026 14:36:04 +0000 (0:00:00.323) 0:01:18.015 ****** 2026-01-10 14:37:14.843133 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:37:14.843145 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:37:14.843152 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:37:14.843159 | orchestrator | 2026-01-10 14:37:14.843166 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2026-01-10 14:37:14.843173 | orchestrator | Saturday 10 January 2026 14:36:04 +0000 (0:00:00.510) 0:01:18.526 ****** 2026-01-10 14:37:14.843179 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:37:14.843186 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:37:14.843193 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:37:14.843200 | 
orchestrator | 2026-01-10 14:37:14.843206 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2026-01-10 14:37:14.843213 | orchestrator | Saturday 10 January 2026 14:36:04 +0000 (0:00:00.285) 0:01:18.811 ****** 2026-01-10 14:37:14.843220 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:37:14.843227 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:37:14.843242 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:37:14.843247 | orchestrator | 2026-01-10 14:37:14.843261 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2026-01-10 14:37:14.843267 | orchestrator | Saturday 10 January 2026 14:36:05 +0000 (0:00:00.307) 0:01:19.119 ****** 2026-01-10 14:37:14.843273 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:37:14.843279 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:37:14.843286 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:37:14.843292 | orchestrator | 2026-01-10 14:37:14.843298 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2026-01-10 14:37:14.843304 | orchestrator | Saturday 10 January 2026 14:36:05 +0000 (0:00:00.333) 0:01:19.452 ****** 2026-01-10 14:37:14.843310 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:37:14.843317 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:37:14.843323 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:37:14.843330 | orchestrator | 2026-01-10 14:37:14.843336 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-01-10 14:37:14.843343 | orchestrator | Saturday 10 January 2026 14:36:05 +0000 (0:00:00.306) 0:01:19.758 ****** 2026-01-10 14:37:14.843350 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:37:14.843357 | orchestrator | 2026-01-10 14:37:14.843363 | orchestrator | 
TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2026-01-10 14:37:14.843369 | orchestrator | Saturday 10 January 2026 14:36:06 +0000 (0:00:00.788) 0:01:20.547 ****** 2026-01-10 14:37:14.843375 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:37:14.843381 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:37:14.843387 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:37:14.843393 | orchestrator | 2026-01-10 14:37:14.843399 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2026-01-10 14:37:14.843405 | orchestrator | Saturday 10 January 2026 14:36:07 +0000 (0:00:00.445) 0:01:20.992 ****** 2026-01-10 14:37:14.843410 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:37:14.843416 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:37:14.843422 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:37:14.843429 | orchestrator | 2026-01-10 14:37:14.843435 | orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2026-01-10 14:37:14.843441 | orchestrator | Saturday 10 January 2026 14:36:07 +0000 (0:00:00.431) 0:01:21.423 ****** 2026-01-10 14:37:14.843447 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:37:14.843453 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:37:14.843459 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:37:14.843465 | orchestrator | 2026-01-10 14:37:14.843472 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2026-01-10 14:37:14.843478 | orchestrator | Saturday 10 January 2026 14:36:08 +0000 (0:00:00.531) 0:01:21.954 ****** 2026-01-10 14:37:14.843484 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:37:14.843490 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:37:14.843496 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:37:14.843502 | orchestrator | 2026-01-10 14:37:14.843509 | orchestrator | TASK [ovn-db : Remove an old node with 
the same ip address as the new node in NB DB] *** 2026-01-10 14:37:14.843516 | orchestrator | Saturday 10 January 2026 14:36:08 +0000 (0:00:00.353) 0:01:22.308 ****** 2026-01-10 14:37:14.843523 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:37:14.843529 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:37:14.843536 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:37:14.843543 | orchestrator | 2026-01-10 14:37:14.843550 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2026-01-10 14:37:14.843557 | orchestrator | Saturday 10 January 2026 14:36:08 +0000 (0:00:00.334) 0:01:22.643 ****** 2026-01-10 14:37:14.843564 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:37:14.843571 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:37:14.843577 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:37:14.843584 | orchestrator | 2026-01-10 14:37:14.843598 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2026-01-10 14:37:14.843604 | orchestrator | Saturday 10 January 2026 14:36:09 +0000 (0:00:00.504) 0:01:23.148 ****** 2026-01-10 14:37:14.843611 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:37:14.843617 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:37:14.843623 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:37:14.843628 | orchestrator | 2026-01-10 14:37:14.843640 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2026-01-10 14:37:14.843646 | orchestrator | Saturday 10 January 2026 14:36:09 +0000 (0:00:00.590) 0:01:23.738 ****** 2026-01-10 14:37:14.843651 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:37:14.843657 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:37:14.843663 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:37:14.843669 | orchestrator | 2026-01-10 14:37:14.843675 | orchestrator | TASK [ovn-db : Ensuring 
config directories exist] ****************************** 2026-01-10 14:37:14.843682 | orchestrator | Saturday 10 January 2026 14:36:10 +0000 (0:00:00.341) 0:01:24.079 ****** 2026-01-10 14:37:14.843690 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:37:14.843698 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:37:14.843713 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:37:14 | INFO  | Task dfb308a8-9aed-4c35-936d-6fb4f84c96d6 is in state SUCCESS 2026-01-10 14:37:14.843730 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}}) 2026-01-10 14:37:14.843739 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:37:14.843746 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:37:14.843826 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:37:14.843847 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:37:14.843860 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:37:14.843867 | orchestrator | 2026-01-10 14:37:14.843874 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-01-10 14:37:14.843880 | orchestrator | Saturday 10 January 2026 14:36:11 +0000 (0:00:01.562) 0:01:25.641 ****** 2026-01-10 14:37:14.843888 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:37:14.843895 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:37:14.843908 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:37:14.843914 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:37:14.843921 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:37:14.843928 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:37:14.843934 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:37:14.843945 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:37:14.843953 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:37:14.843960 | orchestrator | 2026-01-10 14:37:14.843966 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2026-01-10 14:37:14.843973 | orchestrator | Saturday 10 January 2026 14:36:15 +0000 (0:00:04.161) 0:01:29.803 ****** 2026-01-10 14:37:14.843983 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:37:14.843990 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:37:14.843996 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': 
['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:37:14.844010 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:37:14.844017 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:37:14.844024 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:37:14.844031 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 
14:37:14.844041 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:37:14.844048 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:37:14.844053 | orchestrator | 2026-01-10 14:37:14.844059 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-01-10 14:37:14.844064 | orchestrator | Saturday 10 January 2026 14:36:18 +0000 (0:00:02.448) 0:01:32.252 ****** 2026-01-10 14:37:14.844070 | orchestrator | 2026-01-10 14:37:14.844076 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-01-10 14:37:14.844081 | orchestrator | Saturday 10 January 2026 14:36:18 +0000 (0:00:00.065) 0:01:32.317 ****** 2026-01-10 14:37:14.844087 | orchestrator | 2026-01-10 14:37:14.844092 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-01-10 14:37:14.844103 | orchestrator | Saturday 10 January 2026 14:36:18 +0000 (0:00:00.063) 0:01:32.380 ****** 2026-01-10 14:37:14.844110 | orchestrator | 2026-01-10 14:37:14.844116 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-01-10 14:37:14.844121 | orchestrator | Saturday 10 January 2026 
14:36:18 +0000 (0:00:00.072) 0:01:32.453 ****** 2026-01-10 14:37:14.844128 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:37:14.844134 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:37:14.844139 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:37:14.844146 | orchestrator | 2026-01-10 14:37:14.844152 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-01-10 14:37:14.844158 | orchestrator | Saturday 10 January 2026 14:36:21 +0000 (0:00:02.712) 0:01:35.165 ****** 2026-01-10 14:37:14.844165 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:37:14.844171 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:37:14.844177 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:37:14.844183 | orchestrator | 2026-01-10 14:37:14.844188 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-01-10 14:37:14.844194 | orchestrator | Saturday 10 January 2026 14:36:30 +0000 (0:00:08.876) 0:01:44.042 ****** 2026-01-10 14:37:14.844200 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:37:14.844206 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:37:14.844211 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:37:14.844217 | orchestrator | 2026-01-10 14:37:14.844222 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-01-10 14:37:14.844228 | orchestrator | Saturday 10 January 2026 14:36:32 +0000 (0:00:02.619) 0:01:46.661 ****** 2026-01-10 14:37:14.844234 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:37:14.844240 | orchestrator | 2026-01-10 14:37:14.844246 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-01-10 14:37:14.844252 | orchestrator | Saturday 10 January 2026 14:36:33 +0000 (0:00:00.400) 0:01:47.062 ****** 2026-01-10 14:37:14.844258 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:37:14.844270 | orchestrator 
| ok: [testbed-node-0] 2026-01-10 14:37:14.844285 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:37:14.844291 | orchestrator | 2026-01-10 14:37:14.844297 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-01-10 14:37:14.844304 | orchestrator | Saturday 10 January 2026 14:36:34 +0000 (0:00:00.977) 0:01:48.039 ****** 2026-01-10 14:37:14.844310 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:37:14.844316 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:37:14.844322 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:37:14.844327 | orchestrator | 2026-01-10 14:37:14.844333 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-01-10 14:37:14.844339 | orchestrator | Saturday 10 January 2026 14:36:34 +0000 (0:00:00.595) 0:01:48.634 ****** 2026-01-10 14:37:14.844345 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:37:14.844351 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:37:14.844370 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:37:14.844376 | orchestrator | 2026-01-10 14:37:14.844381 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-01-10 14:37:14.844394 | orchestrator | Saturday 10 January 2026 14:36:35 +0000 (0:00:00.717) 0:01:49.352 ****** 2026-01-10 14:37:14.844400 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:37:14.844406 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:37:14.844411 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:37:14.844418 | orchestrator | 2026-01-10 14:37:14.844424 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-01-10 14:37:14.844430 | orchestrator | Saturday 10 January 2026 14:36:36 +0000 (0:00:00.570) 0:01:49.922 ****** 2026-01-10 14:37:14.844437 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:37:14.844443 | orchestrator | ok: [testbed-node-0] 2026-01-10 
14:37:14.844450 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:37:14.844457 | orchestrator | 2026-01-10 14:37:14.844464 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-01-10 14:37:14.844470 | orchestrator | Saturday 10 January 2026 14:36:37 +0000 (0:00:01.066) 0:01:50.989 ****** 2026-01-10 14:37:14.844477 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:37:14.844483 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:37:14.844489 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:37:14.844495 | orchestrator | 2026-01-10 14:37:14.844501 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2026-01-10 14:37:14.844507 | orchestrator | Saturday 10 January 2026 14:36:37 +0000 (0:00:00.751) 0:01:51.740 ****** 2026-01-10 14:37:14.844513 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:37:14.844519 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:37:14.844524 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:37:14.844530 | orchestrator | 2026-01-10 14:37:14.844536 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-01-10 14:37:14.844542 | orchestrator | Saturday 10 January 2026 14:36:38 +0000 (0:00:00.318) 0:01:52.059 ****** 2026-01-10 14:37:14.844550 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:37:14.844557 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': 
['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:37:14.844569 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:37:14.844582 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:37:14.844591 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:37:14.844604 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:37:14.844611 | orchestrator | ok: [testbed-node-0] => (item={'key': 
'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:37:14.844618 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:37:14.844625 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:37:14.844632 | orchestrator | 2026-01-10 14:37:14.844639 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-01-10 14:37:14.844646 | orchestrator | Saturday 10 January 2026 14:36:39 +0000 (0:00:01.425) 0:01:53.485 ****** 2026-01-10 14:37:14.844652 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:37:14.844659 | 
orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:37:14.844665 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:37:14.844681 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:37:14.844688 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:37:14.844695 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': 
['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:37:14.844708 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:37:14.844716 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:37:14.844722 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:37:14.844728 | orchestrator | 2026-01-10 14:37:14.844735 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2026-01-10 14:37:14.844742 | orchestrator | Saturday 10 January 2026 14:36:44 +0000 (0:00:04.989) 0:01:58.475 ****** 2026-01-10 14:37:14.844748 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:37:14.844779 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:37:14.844786 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:37:14.844807 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:37:14.844814 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:37:14.844821 
| orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:37:14.844859 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:37:14.844873 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:37:14.844881 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:37:14.844888 | orchestrator | 2026-01-10 14:37:14.844895 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-01-10 14:37:14.844901 | 
orchestrator | Saturday 10 January 2026 14:36:47 +0000 (0:00:03.275) 0:02:01.751 ****** 2026-01-10 14:37:14.844906 | orchestrator | 2026-01-10 14:37:14.844913 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-01-10 14:37:14.844920 | orchestrator | Saturday 10 January 2026 14:36:47 +0000 (0:00:00.067) 0:02:01.819 ****** 2026-01-10 14:37:14.844926 | orchestrator | 2026-01-10 14:37:14.844933 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-01-10 14:37:14.844939 | orchestrator | Saturday 10 January 2026 14:36:48 +0000 (0:00:00.071) 0:02:01.890 ****** 2026-01-10 14:37:14.844947 | orchestrator | 2026-01-10 14:37:14.844953 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-01-10 14:37:14.844960 | orchestrator | Saturday 10 January 2026 14:36:48 +0000 (0:00:00.103) 0:02:01.994 ****** 2026-01-10 14:37:14.844966 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:37:14.844973 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:37:14.844980 | orchestrator | 2026-01-10 14:37:14.844987 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-01-10 14:37:14.844999 | orchestrator | Saturday 10 January 2026 14:36:54 +0000 (0:00:06.561) 0:02:08.555 ****** 2026-01-10 14:37:14.845006 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:37:14.845013 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:37:14.845020 | orchestrator | 2026-01-10 14:37:14.845027 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-01-10 14:37:14.845034 | orchestrator | Saturday 10 January 2026 14:37:01 +0000 (0:00:06.589) 0:02:15.145 ****** 2026-01-10 14:37:14.845041 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:37:14.845047 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:37:14.845054 | orchestrator | 2026-01-10 
14:37:14.845061 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-01-10 14:37:14.845068 | orchestrator | Saturday 10 January 2026 14:37:07 +0000 (0:00:06.520) 0:02:21.665 ****** 2026-01-10 14:37:14.845074 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:37:14.845081 | orchestrator | 2026-01-10 14:37:14.845088 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-01-10 14:37:14.845094 | orchestrator | Saturday 10 January 2026 14:37:07 +0000 (0:00:00.145) 0:02:21.811 ****** 2026-01-10 14:37:14.845100 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:37:14.845106 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:37:14.845112 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:37:14.845119 | orchestrator | 2026-01-10 14:37:14.845126 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-01-10 14:37:14.845133 | orchestrator | Saturday 10 January 2026 14:37:08 +0000 (0:00:00.831) 0:02:22.642 ****** 2026-01-10 14:37:14.845140 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:37:14.845147 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:37:14.845154 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:37:14.845160 | orchestrator | 2026-01-10 14:37:14.845171 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-01-10 14:37:14.845178 | orchestrator | Saturday 10 January 2026 14:37:09 +0000 (0:00:00.737) 0:02:23.379 ****** 2026-01-10 14:37:14.845185 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:37:14.845191 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:37:14.845197 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:37:14.845203 | orchestrator | 2026-01-10 14:37:14.845210 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-01-10 14:37:14.845217 | orchestrator | Saturday 10 January 2026 
14:37:10 +0000 (0:00:00.894) 0:02:24.274 ****** 2026-01-10 14:37:14.845224 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:37:14.845231 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:37:14.845238 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:37:14.845245 | orchestrator | 2026-01-10 14:37:14.845252 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-01-10 14:37:14.845259 | orchestrator | Saturday 10 January 2026 14:37:11 +0000 (0:00:00.664) 0:02:24.939 ****** 2026-01-10 14:37:14.845266 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:37:14.845273 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:37:14.845280 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:37:14.845286 | orchestrator | 2026-01-10 14:37:14.845293 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-01-10 14:37:14.845300 | orchestrator | Saturday 10 January 2026 14:37:11 +0000 (0:00:00.792) 0:02:25.732 ****** 2026-01-10 14:37:14.845307 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:37:14.845314 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:37:14.845321 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:37:14.845328 | orchestrator | 2026-01-10 14:37:14.845335 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-10 14:37:14.845343 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-01-10 14:37:14.845358 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2026-01-10 14:37:14.845372 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2026-01-10 14:37:14.845378 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-10 14:37:14.845385 | orchestrator | testbed-node-4 : ok=12  
changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-10 14:37:14.845391 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-10 14:37:14.845397 | orchestrator | 2026-01-10 14:37:14.845404 | orchestrator | 2026-01-10 14:37:14.845411 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-10 14:37:14.845418 | orchestrator | Saturday 10 January 2026 14:37:12 +0000 (0:00:01.044) 0:02:26.777 ****** 2026-01-10 14:37:14.845426 | orchestrator | =============================================================================== 2026-01-10 14:37:14.845433 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 29.62s 2026-01-10 14:37:14.845440 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 22.20s 2026-01-10 14:37:14.845447 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 15.47s 2026-01-10 14:37:14.845454 | orchestrator | ovn-db : Restart ovn-nb-db container ------------------------------------ 9.27s 2026-01-10 14:37:14.845460 | orchestrator | ovn-db : Restart ovn-northd container ----------------------------------- 9.14s 2026-01-10 14:37:14.845467 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.99s 2026-01-10 14:37:14.845474 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.16s 2026-01-10 14:37:14.845481 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 3.28s 2026-01-10 14:37:14.845488 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 3.09s 2026-01-10 14:37:14.845495 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 2.70s 2026-01-10 14:37:14.845502 | orchestrator | ovn-db : Check ovn containers 
------------------------------------------- 2.45s
2026-01-10 14:37:14.845509 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 2.36s
2026-01-10 14:37:14.845516 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 2.21s
2026-01-10 14:37:14.845523 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 2.09s
2026-01-10 14:37:14.845530 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.68s
2026-01-10 14:37:14.845537 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.56s
2026-01-10 14:37:14.845544 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.51s
2026-01-10 14:37:14.845551 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.43s
2026-01-10 14:37:14.845557 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 1.18s
2026-01-10 14:37:14.845564 | orchestrator | ovn-db : Wait for ovn-nb-db --------------------------------------------- 1.07s
2026-01-10 14:37:14.845571 | orchestrator | 2026-01-10 14:37:14 | INFO  | Task 955b2409-5d44-41fc-9935-7bfdb67baec4 is in state STARTED
2026-01-10 14:37:14.845579 | orchestrator | 2026-01-10 14:37:14 | INFO  | Task 17d93e9b-366a-450f-8ecc-93dc96c1cffb is in state STARTED
2026-01-10 14:37:14.845590 | orchestrator | 2026-01-10 14:37:14 | INFO  | Wait 1 second(s) until the next check
[... the same two "is in state STARTED" polls plus "Wait 1 second(s) until the next check" repeated every ~3 seconds from 14:37:17 through 14:40:14 omitted ...]
2026-01-10 14:40:17.760392 | orchestrator | 2026-01-10 14:40:17 | INFO  | Task 955b2409-5d44-41fc-9935-7bfdb67baec4 is in state STARTED
2026-01-10 14:40:17.762315 | orchestrator | 2026-01-10 14:40:17 | INFO  | Task 17d93e9b-366a-450f-8ecc-93dc96c1cffb is in state STARTED
2026-01-10 14:40:17.762353 | orchestrator | 2026-01-10 14:40:17 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:40:20.804504 | orchestrator | 2026-01-10 14:40:20 | INFO  | Task f8ee9db1-a5aa-4411-bee5-bf63fa5e6a70 is in state STARTED
2026-01-10 14:40:20.811163 | orchestrator | 2026-01-10 14:40:20 | INFO  | Task 955b2409-5d44-41fc-9935-7bfdb67baec4 is in state SUCCESS
2026-01-10 14:40:20.813478 | orchestrator |
2026-01-10 14:40:20.813553 | orchestrator |
2026-01-10 14:40:20.813623 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-10 14:40:20.813634 | orchestrator |
2026-01-10 14:40:20.813651 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-10 14:40:20.813661 | orchestrator | Saturday 10 January 2026 14:33:37 +0000 (0:00:00.396) 0:00:00.396 ******
2026-01-10 14:40:20.813669 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:40:20.813678 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:40:20.813687 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:40:20.813695 | orchestrator |
2026-01-10 14:40:20.813703 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-10 14:40:20.813711 | orchestrator | Saturday 10 January 2026 14:33:37 +0000 (0:00:00.453) 0:00:00.850 ******
2026-01-10 14:40:20.813719 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
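The long run of status messages above comes from a client polling two background tasks until they leave the STARTED state. A minimal sketch of that poll-until-terminal pattern, assuming a `get_task_state` callable as a hypothetical stand-in for the real osism task API (not the actual client code):

```python
import time

# States after which a task no longer needs polling (assumed set).
TERMINAL_STATES = {"SUCCESS", "FAILURE"}

def wait_for_tasks(task_ids, get_task_state, interval=1.0, timeout=3600.0):
    """Poll each task until every one reaches a terminal state."""
    deadline = time.monotonic() + timeout
    pending = set(task_ids)
    while pending:
        if time.monotonic() > deadline:
            raise TimeoutError(f"tasks still pending: {sorted(pending)}")
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in TERMINAL_STATES:
                pending.discard(task_id)
        if pending:
            print(f"Wait {interval:.0f} second(s) until the next check")
            time.sleep(interval)
```

Note that in the log above each poll cycle is roughly three seconds apart even though the message says one second; the extra time is spent making the two status requests themselves.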
2026-01-10 14:40:20.813727 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2026-01-10 14:40:20.813735 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2026-01-10 14:40:20.813742 | orchestrator |
2026-01-10 14:40:20.813750 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2026-01-10 14:40:20.813758 | orchestrator |
2026-01-10 14:40:20.813766 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2026-01-10 14:40:20.813773 | orchestrator | Saturday 10 January 2026 14:33:38 +0000 (0:00:00.826) 0:00:01.677 ******
2026-01-10 14:40:20.813898 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:40:20.813906 | orchestrator |
2026-01-10 14:40:20.813914 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2026-01-10 14:40:20.813922 | orchestrator | Saturday 10 January 2026 14:33:39 +0000 (0:00:01.143) 0:00:02.820 ******
2026-01-10 14:40:20.813930 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:40:20.813938 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:40:20.813946 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:40:20.814110 | orchestrator |
2026-01-10 14:40:20.814121 | orchestrator | TASK [Setting sysctl values] ***************************************************
2026-01-10 14:40:20.814130 | orchestrator | Saturday 10 January 2026 14:33:40 +0000 (0:00:00.891) 0:00:03.712 ******
2026-01-10 14:40:20.814140 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:40:20.814148 | orchestrator |
2026-01-10 14:40:20.814158 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2026-01-10 14:40:20.814167 | orchestrator | Saturday 10 January 2026 14:33:42 +0000 (0:00:01.634) 0:00:05.346 ******
2026-01-10 14:40:20.814176 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:40:20.814185 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:40:20.814194 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:40:20.814204 | orchestrator |
2026-01-10 14:40:20.814217 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2026-01-10 14:40:20.814231 | orchestrator | Saturday 10 January 2026 14:33:44 +0000 (0:00:01.891) 0:00:07.238 ******
2026-01-10 14:40:20.814249 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-01-10 14:40:20.814297 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-01-10 14:40:20.814312 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-01-10 14:40:20.814325 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-01-10 14:40:20.814337 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-01-10 14:40:20.814350 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-01-10 14:40:20.814363 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-01-10 14:40:20.814377 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-01-10 14:40:20.814389 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-01-10 14:40:20.814402 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-01-10 14:40:20.814545 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-01-10 14:40:20.814628 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-01-10 14:40:20.814637 | orchestrator |
2026-01-10 14:40:20.814645 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-01-10 14:40:20.814653 | orchestrator | Saturday 10 January 2026 14:33:46 +0000 (0:00:02.815) 0:00:10.054 ******
2026-01-10 14:40:20.814661 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-01-10 14:40:20.814669 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-01-10 14:40:20.814677 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-01-10 14:40:20.814685 | orchestrator |
2026-01-10 14:40:20.814693 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-01-10 14:40:20.814701 | orchestrator | Saturday 10 January 2026 14:33:47 +0000 (0:00:00.948) 0:00:11.003 ******
2026-01-10 14:40:20.814732 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-01-10 14:40:20.814741 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-01-10 14:40:20.814749 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-01-10 14:40:20.814757 | orchestrator |
2026-01-10 14:40:20.814765 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-01-10 14:40:20.814778 | orchestrator | Saturday 10 January 2026 14:33:49 +0000 (0:00:01.592) 0:00:12.596 ******
2026-01-10 14:40:20.814795 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)
2026-01-10 14:40:20.814815 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:40:20.814860 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)
2026-01-10 14:40:20.814874 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:40:20.814885 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)
2026-01-10 14:40:20.814898 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:40:20.814912 | orchestrator |
2026-01-10 14:40:20.814925 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************
2026-01-10 14:40:20.814938 | orchestrator | Saturday 10 January 2026 14:33:50 +0000 (0:00:01.270) 0:00:13.866 ******
2026-01-10 14:40:20.814955 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-01-10 14:40:20.814979 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-01-10 14:40:20.815006 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-01-10 14:40:20.815020 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-10 14:40:20.815036 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-10 14:40:20.815070 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-10 14:40:20.815085 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-10 14:40:20.815100 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-10 14:40:20.815123 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-10 14:40:20.815137 | orchestrator |
2026-01-10 14:40:20.815151 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************
2026-01-10 14:40:20.815164 | orchestrator | Saturday 10 January 2026 14:33:52 +0000 (0:00:02.066) 0:00:15.933 ******
2026-01-10 14:40:20.815177 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:40:20.815189 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:40:20.815201 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:40:20.815213 | orchestrator |
2026-01-10 14:40:20.815226 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] ****
2026-01-10 14:40:20.815240 | orchestrator | Saturday 10 January 2026 14:33:54 +0000 (0:00:01.634) 0:00:17.567 ******
2026-01-10 14:40:20.815253 | orchestrator | changed: [testbed-node-1] => (item=users)
2026-01-10 14:40:20.815268 | orchestrator | changed: [testbed-node-0] => (item=users)
2026-01-10 14:40:20.815282 | orchestrator | changed: [testbed-node-2] => (item=users)
2026-01-10 14:40:20.815296 | orchestrator | changed: [testbed-node-0] => (item=rules)
2026-01-10 14:40:20.815309 | orchestrator | changed: [testbed-node-1] => (item=rules)
2026-01-10 14:40:20.815332 | orchestrator | changed: [testbed-node-2] => (item=rules)
2026-01-10 14:40:20.815341 | orchestrator |
2026-01-10 14:40:20.815349 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] *****************
2026-01-10 14:40:20.815356 | orchestrator | Saturday 10 January 2026 14:33:57 +0000 (0:00:02.898) 0:00:20.466 ******
2026-01-10 14:40:20.815364 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:40:20.815412 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:40:20.815421 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:40:20.815429 | orchestrator |
2026-01-10 14:40:20.815531 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] *******************
2026-01-10 14:40:20.815541 | orchestrator |
Saturday 10 January 2026 14:33:59 +0000 (0:00:01.839) 0:00:22.305 ****** 2026-01-10 14:40:20.815549 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:40:20.815557 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:40:20.815601 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:40:20.815610 | orchestrator | 2026-01-10 14:40:20.815617 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2026-01-10 14:40:20.815625 | orchestrator | Saturday 10 January 2026 14:34:03 +0000 (0:00:03.902) 0:00:26.207 ****** 2026-01-10 14:40:20.815636 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-10 14:40:20.815674 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-10 14:40:20.815719 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-10 14:40:20.815815 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__f33e554409d4fd9a968e76e37084b1d3d43eda21', '__omit_place_holder__f33e554409d4fd9a968e76e37084b1d3d43eda21'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-10 14:40:20.815830 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:40:20.815845 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-10 
14:40:20.815860 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-10 14:40:20.815875 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-10 14:40:20.815903 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-10 14:40:20.815920 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-10 14:40:20.815929 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-10 14:40:20.815937 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__f33e554409d4fd9a968e76e37084b1d3d43eda21', '__omit_place_holder__f33e554409d4fd9a968e76e37084b1d3d43eda21'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-10 14:40:20.815945 | orchestrator | skipping: [testbed-node-2] 2026-01-10 
14:40:20.815954 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__f33e554409d4fd9a968e76e37084b1d3d43eda21', '__omit_place_holder__f33e554409d4fd9a968e76e37084b1d3d43eda21'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-10 14:40:20.816019 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:40:20.816028 | orchestrator | 2026-01-10 14:40:20.816036 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2026-01-10 14:40:20.816044 | orchestrator | Saturday 10 January 2026 14:34:03 +0000 (0:00:00.571) 0:00:26.779 ****** 2026-01-10 14:40:20.816052 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-01-10 14:40:20.816078 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-01-10 14:40:20.816088 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-01-10 14:40:20.816097 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-10 14:40:20.816105 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 
'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-10 14:40:20.816114 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__f33e554409d4fd9a968e76e37084b1d3d43eda21', '__omit_place_holder__f33e554409d4fd9a968e76e37084b1d3d43eda21'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-10 14:40:20.816122 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-10 14:40:20.816130 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 
'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-10 14:40:20.816155 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-10 14:40:20.816164 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__f33e554409d4fd9a968e76e37084b1d3d43eda21', '__omit_place_holder__f33e554409d4fd9a968e76e37084b1d3d43eda21'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-10 14:40:20.816173 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-10 14:40:20.816181 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__f33e554409d4fd9a968e76e37084b1d3d43eda21', '__omit_place_holder__f33e554409d4fd9a968e76e37084b1d3d43eda21'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-10 14:40:20.816189 | orchestrator | 2026-01-10 14:40:20.816197 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2026-01-10 14:40:20.816206 | orchestrator | Saturday 10 January 2026 14:34:06 +0000 (0:00:03.312) 0:00:30.092 ****** 2026-01-10 14:40:20.816214 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-01-10 14:40:20.816223 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-01-10 14:40:20.816246 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-01-10 14:40:20.816292 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-10 14:40:20.816317 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-10 14:40:20.816384 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-10 14:40:20.816399 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-10 14:40:20.816412 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-10 14:40:20.816434 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-10 14:40:20.816448 | orchestrator | 2026-01-10 14:40:20.816460 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2026-01-10 14:40:20.816472 | orchestrator | Saturday 10 January 2026 14:34:11 +0000 (0:00:04.732) 0:00:34.824 ****** 2026-01-10 14:40:20.816485 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-01-10 14:40:20.816513 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-01-10 14:40:20.816528 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-01-10 14:40:20.816540 | orchestrator | 2026-01-10 
14:40:20.816554 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2026-01-10 14:40:20.816671 | orchestrator | Saturday 10 January 2026 14:34:13 +0000 (0:00:02.255) 0:00:37.080 ****** 2026-01-10 14:40:20.816686 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-01-10 14:40:20.816695 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-01-10 14:40:20.816709 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-01-10 14:40:20.816736 | orchestrator | 2026-01-10 14:40:20.816749 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2026-01-10 14:40:20.816762 | orchestrator | Saturday 10 January 2026 14:34:18 +0000 (0:00:04.284) 0:00:41.365 ****** 2026-01-10 14:40:20.816776 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:40:20.816790 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:40:20.816802 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:40:20.816816 | orchestrator | 2026-01-10 14:40:20.816996 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2026-01-10 14:40:20.817011 | orchestrator | Saturday 10 January 2026 14:34:18 +0000 (0:00:00.782) 0:00:42.147 ****** 2026-01-10 14:40:20.817019 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-01-10 14:40:20.817029 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-01-10 14:40:20.817037 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-01-10 14:40:20.817045 | orchestrator | 2026-01-10 
14:40:20.817053 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2026-01-10 14:40:20.817060 | orchestrator | Saturday 10 January 2026 14:34:22 +0000 (0:00:03.269) 0:00:45.416 ****** 2026-01-10 14:40:20.817069 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-01-10 14:40:20.817077 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-01-10 14:40:20.817085 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-01-10 14:40:20.817103 | orchestrator | 2026-01-10 14:40:20.817111 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2026-01-10 14:40:20.817119 | orchestrator | Saturday 10 January 2026 14:34:25 +0000 (0:00:03.321) 0:00:48.738 ****** 2026-01-10 14:40:20.817127 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2026-01-10 14:40:20.817135 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2026-01-10 14:40:20.817143 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2026-01-10 14:40:20.817151 | orchestrator | 2026-01-10 14:40:20.817158 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2026-01-10 14:40:20.817166 | orchestrator | Saturday 10 January 2026 14:34:27 +0000 (0:00:01.844) 0:00:50.583 ****** 2026-01-10 14:40:20.817174 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2026-01-10 14:40:20.817182 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2026-01-10 14:40:20.817189 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2026-01-10 14:40:20.817198 | orchestrator | 2026-01-10 14:40:20.817206 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 
2026-01-10 14:40:20.817214 | orchestrator | Saturday 10 January 2026 14:34:29 +0000 (0:00:01.748) 0:00:52.331 ****** 2026-01-10 14:40:20.817222 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:40:20.817230 | orchestrator | 2026-01-10 14:40:20.817237 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2026-01-10 14:40:20.817245 | orchestrator | Saturday 10 January 2026 14:34:29 +0000 (0:00:00.744) 0:00:53.075 ****** 2026-01-10 14:40:20.817255 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-01-10 14:40:20.817281 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-01-10 14:40:20.817292 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-01-10 14:40:20.817300 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-10 14:40:20.817315 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-10 
14:40:20.817324 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-10 14:40:20.817413 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-10 14:40:20.817431 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-10 14:40:20.817470 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-10 14:40:20.817654 | orchestrator | 2026-01-10 14:40:20.817677 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2026-01-10 14:40:20.817692 | orchestrator | Saturday 10 January 2026 14:34:34 +0000 (0:00:04.208) 0:00:57.284 ****** 2026-01-10 14:40:20.817744 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-10 14:40:20.817771 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 
6032'], 'timeout': '30'}}})  2026-01-10 14:40:20.817781 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-10 14:40:20.817789 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:40:20.817798 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-10 14:40:20.817807 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 
6032'], 'timeout': '30'}}})  2026-01-10 14:40:20.817829 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-10 14:40:20.817838 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:40:20.817846 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-10 14:40:20.817860 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 
6032'], 'timeout': '30'}}})  2026-01-10 14:40:20.817869 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-10 14:40:20.817877 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:40:20.817885 | orchestrator | 2026-01-10 14:40:20.817893 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2026-01-10 14:40:20.817901 | orchestrator | Saturday 10 January 2026 14:34:35 +0000 (0:00:01.072) 0:00:58.356 ****** 2026-01-10 14:40:20.817909 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-10 14:40:20.817918 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-10 14:40:20.817936 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-10 14:40:20.817945 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:40:20.817954 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-10 14:40:20.817967 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-10 14:40:20.817976 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-10 14:40:20.818075 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:40:20.818128 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-10 14:40:20.818160 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-10 14:40:20.818176 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-10 14:40:20.818189 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:40:20.818202 | orchestrator | 2026-01-10 14:40:20.818290 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-01-10 14:40:20.818299 | orchestrator | Saturday 10 January 2026 14:34:36 +0000 (0:00:00.909) 0:00:59.266 ****** 2026-01-10 14:40:20.818322 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-10 14:40:20.818340 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-10 14:40:20.818349 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-10 14:40:20.818357 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:40:20.818366 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-10 14:40:20.818375 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-10 14:40:20.818384 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-10 14:40:20.818440 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:40:20.818460 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-10 14:40:20.818477 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-10 14:40:20.818491 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-10 14:40:20.818505 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:40:20.818525 | orchestrator | 2026-01-10 14:40:20.818542 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-01-10 14:40:20.818555 | orchestrator | Saturday 10 January 2026 14:34:37 +0000 (0:00:01.242) 0:01:00.508 ****** 2026-01-10 14:40:20.818634 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-10 14:40:20.818648 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-10 14:40:20.818662 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-10 14:40:20.818674 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:40:20.818688 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-10 14:40:20.818797 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-10 14:40:20.818820 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-10 14:40:20.818829 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:40:20.818837 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-10 14:40:20.818846 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-10 14:40:20.818854 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-10 14:40:20.818862 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:40:20.818870 | orchestrator | 2026-01-10 14:40:20.818878 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-01-10 14:40:20.818887 | orchestrator | Saturday 10 January 2026 14:34:38 +0000 (0:00:00.696) 0:01:01.205 ****** 2026-01-10 14:40:20.818895 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': 
{'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-10 14:40:20.820001 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-10 14:40:20.820052 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-10 14:40:20.820059 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:40:20.820065 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': 
{'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-10 14:40:20.820070 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-10 14:40:20.820074 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-10 14:40:20.820078 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:40:20.820083 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': 
{'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-10 14:40:20.820110 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-10 14:40:20.820115 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-10 14:40:20.820119 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:40:20.820123 | orchestrator | 2026-01-10 14:40:20.820127 | orchestrator | TASK 
[service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2026-01-10 14:40:20.820132 | orchestrator | Saturday 10 January 2026 14:34:38 +0000 (0:00:00.949) 0:01:02.155 ****** 2026-01-10 14:40:20.820136 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-10 14:40:20.820140 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-10 14:40:20.820144 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-10 14:40:20.820148 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:40:20.820152 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-10 14:40:20.820159 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-10 14:40:20.820170 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-10 14:40:20.820174 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:40:20.820178 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-10 14:40:20.820182 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-10 14:40:20.820186 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-10 14:40:20.820190 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:40:20.820194 | orchestrator | 2026-01-10 14:40:20.820198 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] *** 2026-01-10 14:40:20.820202 | orchestrator | Saturday 10 January 2026 14:34:39 +0000 (0:00:00.948) 0:01:03.104 ****** 2026-01-10 14:40:20.820206 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-10 14:40:20.820223 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-10 14:40:20.820233 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': 
{'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-10 14:40:20.820237 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-10 14:40:20.820241 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:40:20.820245 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-10 14:40:20.820249 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': 
{'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-10 14:40:20.820253 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:40:20.820257 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-10 14:40:20.820265 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-10 14:40:20.820269 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': 
{'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-10 14:40:20.820273 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:40:20.820277 | orchestrator | 2026-01-10 14:40:20.820280 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2026-01-10 14:40:20.820289 | orchestrator | Saturday 10 January 2026 14:34:40 +0000 (0:00:00.596) 0:01:03.700 ****** 2026-01-10 14:40:20.820294 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-10 14:40:20.820298 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-10 14:40:20.820302 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-10 14:40:20.820305 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:40:20.820309 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-10 14:40:20.820317 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-10 14:40:20.820321 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-10 14:40:20.820325 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:40:20.820334 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-10 14:40:20.820339 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-10 14:40:20.820343 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-10 14:40:20.820347 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:40:20.820350 | orchestrator | 2026-01-10 14:40:20.820354 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2026-01-10 14:40:20.820358 | orchestrator | Saturday 10 January 2026 14:34:41 +0000 (0:00:00.851) 0:01:04.552 ****** 2026-01-10 14:40:20.820365 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-01-10 14:40:20.820370 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-01-10 14:40:20.820374 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-01-10 14:40:20.820377 | orchestrator | 2026-01-10 14:40:20.820381 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2026-01-10 14:40:20.820385 | orchestrator | Saturday 10 January 2026 14:34:43 +0000 (0:00:02.406) 0:01:06.959 ****** 2026-01-10 14:40:20.820389 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 
2026-01-10 14:40:20.820393 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-01-10 14:40:20.820396 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-01-10 14:40:20.820400 | orchestrator | 2026-01-10 14:40:20.820404 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2026-01-10 14:40:20.820408 | orchestrator | Saturday 10 January 2026 14:34:45 +0000 (0:00:01.814) 0:01:08.773 ****** 2026-01-10 14:40:20.820411 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-01-10 14:40:20.820415 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-01-10 14:40:20.820419 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-01-10 14:40:20.820422 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-01-10 14:40:20.820426 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:40:20.820430 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-01-10 14:40:20.820434 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:40:20.820437 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-01-10 14:40:20.820441 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:40:20.820445 | orchestrator | 2026-01-10 14:40:20.820449 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2026-01-10 14:40:20.820453 | orchestrator | Saturday 10 January 2026 14:34:46 +0000 (0:00:01.216) 0:01:09.990 ****** 2026-01-10 14:40:20.820462 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-01-10 14:40:20.820468 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-01-10 14:40:20.820472 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-01-10 14:40:20.820478 
| orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-10 14:40:20.820482 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-10 14:40:20.820487 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 
6032'], 'timeout': '30'}}}) 2026-01-10 14:40:20.820491 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-10 14:40:20.820500 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-10 14:40:20.820505 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-10 14:40:20.820511 | orchestrator | 2026-01-10 14:40:20.820515 | orchestrator | TASK [include_role : aodh] ***************************************************** 2026-01-10 14:40:20.820519 | orchestrator | Saturday 10 January 2026 14:34:50 +0000 (0:00:03.174) 0:01:13.164 ****** 2026-01-10 
14:40:20.820523 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:40:20.820527 | orchestrator |
2026-01-10 14:40:20.820530 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] ***********************
2026-01-10 14:40:20.820534 | orchestrator | Saturday 10 January 2026 14:34:50 +0000 (0:00:00.746) 0:01:13.911 ******
2026-01-10 14:40:20.820539 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-01-10 14:40:20.820544 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-01-10 14:40:20.820548 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-01-10 14:40:20.820553 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-01-10 14:40:20.820650 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-01-10 14:40:20.820663 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-01-10 14:40:20.820668 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-01-10 14:40:20.820674 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-01-10 14:40:20.820681 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-01-10 14:40:20.820687 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-01-10 14:40:20.820703 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-01-10 14:40:20.820718 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-01-10 14:40:20.820728 | orchestrator |
2026-01-10 14:40:20.820734 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] ***
2026-01-10 14:40:20.820740 | orchestrator | Saturday 10 January 2026 14:34:55 +0000 (0:00:05.161) 0:01:19.073 ******
2026-01-10 14:40:20.820747 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-01-10 14:40:20.820753 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-01-10 14:40:20.820760 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-01-10 14:40:20.820766 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-01-10 14:40:20.820772 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:40:20.820788 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-01-10 14:40:20.820800 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-01-10 14:40:20.820806 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-01-10 14:40:20.820813 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-01-10 14:40:20.820819 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:40:20.820826 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-01-10 14:40:20.820832 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-01-10 14:40:20.820849 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-01-10 14:40:20.820854 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-01-10 14:40:20.820858 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:40:20.820862 | orchestrator |
2026-01-10 14:40:20.820866 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] **************************
2026-01-10 14:40:20.820870 | orchestrator | Saturday 10 January 2026 14:34:57 +0000 (0:00:01.734) 0:01:20.808 ******
2026-01-10 14:40:20.820874 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})
2026-01-10 14:40:20.820880 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})
2026-01-10 14:40:20.820884 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:40:20.820888 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})
2026-01-10 14:40:20.820892 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})
2026-01-10 14:40:20.820896 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:40:20.820899 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})
2026-01-10 14:40:20.820903 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})
2026-01-10 14:40:20.820907 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:40:20.820911 | orchestrator |
2026-01-10 14:40:20.820915 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] ***************
2026-01-10 14:40:20.820919 | orchestrator | Saturday 10 January 2026 14:34:59 +0000 (0:00:01.678) 0:01:22.487 ******
2026-01-10 14:40:20.820922 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:40:20.820927 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:40:20.820930 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:40:20.820934 | orchestrator |
2026-01-10 14:40:20.820938 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] ***************
2026-01-10 14:40:20.820942 | orchestrator | Saturday 10 January 2026 14:35:00 +0000 (0:00:01.551) 0:01:24.038 ******
2026-01-10 14:40:20.820945 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:40:20.820949 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:40:20.820953 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:40:20.820960 | orchestrator |
2026-01-10 14:40:20.820964 | orchestrator | TASK [include_role : barbican] *************************************************
2026-01-10 14:40:20.820967 | orchestrator | Saturday 10 January 2026 14:35:03 +0000 (0:00:02.220) 0:01:26.258 ******
2026-01-10 14:40:20.820971 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:40:20.820975 | orchestrator |
2026-01-10 14:40:20.820979 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] *******************
2026-01-10 14:40:20.820983 | orchestrator | Saturday 10 January 2026 14:35:04 +0000 (0:00:01.718) 0:01:27.976 ******
2026-01-10 14:40:20.820992 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-01-10 14:40:20.820997 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-01-10 14:40:20.821001 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-01-10 14:40:20.821006 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-01-10 14:40:20.821010 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-01-10 14:40:20.821019 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-01-10 14:40:20.821027 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-01-10 14:40:20.821031 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-01-10 14:40:20.821050 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-01-10 14:40:20.821055 | orchestrator |
2026-01-10 14:40:20.821059 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] ***
2026-01-10 14:40:20.821063 | orchestrator | Saturday 10 January 2026 14:35:09 +0000 (0:00:04.628) 0:01:32.605 ******
2026-01-10 14:40:20.821067 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-01-10 14:40:20.821075 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-01-10 14:40:20.821085 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-01-10 14:40:20.821089 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:40:20.821093 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-01-10 14:40:20.821097 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-01-10 14:40:20.821101 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-01-10 14:40:20.821109 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:40:20.821113 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-01-10 14:40:20.821122 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-01-10 14:40:20.821126 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-01-10 14:40:20.821130 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:40:20.821134 | orchestrator |
2026-01-10 14:40:20.821138 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] **********************
2026-01-10 14:40:20.821141 | orchestrator | Saturday 10 January 2026 14:35:10 +0000 (0:00:00.589) 0:01:33.195 ******
2026-01-10 14:40:20.821145 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2026-01-10 14:40:20.821150 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2026-01-10 14:40:20.821153 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:40:20.821157 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2026-01-10 14:40:20.821161 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2026-01-10 14:40:20.821165 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:40:20.821169 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2026-01-10 14:40:20.821178 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2026-01-10 14:40:20.821182 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:40:20.821186 | orchestrator |
2026-01-10 14:40:20.821189 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] ***********
2026-01-10 14:40:20.821193 | orchestrator | Saturday 10 January 2026 14:35:11 +0000 (0:00:01.008) 0:01:34.204 ******
2026-01-10 14:40:20.821197 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:40:20.821201 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:40:20.821204 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:40:20.821208 | orchestrator |
2026-01-10 14:40:20.821212 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] ***********
2026-01-10 14:40:20.821216 | orchestrator | Saturday 10 January 2026 14:35:12 +0000 (0:00:01.404) 0:01:35.609 ******
2026-01-10 14:40:20.821220 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:40:20.821223 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:40:20.821227 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:40:20.821231 | orchestrator |
2026-01-10 14:40:20.821235 | orchestrator | TASK [include_role : blazar] ***************************************************
2026-01-10 14:40:20.821238 | orchestrator | Saturday 10 January 2026 14:35:15 +0000 (0:00:02.689) 0:01:38.298 ******
2026-01-10 14:40:20.821242 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:40:20.821246 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:40:20.821250 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:40:20.821253 | orchestrator |
2026-01-10 14:40:20.821257 | orchestrator | TASK [include_role : ceph-rgw] *************************************************
2026-01-10 14:40:20.821261 | orchestrator | Saturday 10 January 2026 14:35:15 +0000 (0:00:00.421) 0:01:38.720 ******
2026-01-10 14:40:20.821264 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-2, testbed-node-1
2026-01-10 14:40:20.821268 | orchestrator |
2026-01-10 14:40:20.821272 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] *******************
2026-01-10 14:40:20.821276 | orchestrator | Saturday 10 January 2026 14:35:16 +0000 (0:00:01.115) 0:01:39.836 ******
2026-01-10 14:40:20.822136 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2026-01-10 14:40:20.822187 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2026-01-10 14:40:20.822216 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5',
'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-01-10 14:40:20.822231 | orchestrator | 2026-01-10 14:40:20.822245 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2026-01-10 14:40:20.822259 | orchestrator | Saturday 10 January 2026 14:35:20 +0000 (0:00:04.136) 0:01:43.972 ****** 2026-01-10 14:40:20.822270 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-01-10 14:40:20.822278 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:40:20.822287 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-01-10 14:40:20.822295 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:40:20.822317 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-01-10 14:40:20.822326 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:40:20.822334 | orchestrator | 2026-01-10 14:40:20.822342 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2026-01-10 14:40:20.822360 | orchestrator | Saturday 10 January 2026 14:35:23 +0000 (0:00:02.274) 0:01:46.247 ****** 2026-01-10 14:40:20.822370 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-01-10 14:40:20.822380 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-01-10 14:40:20.822389 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:40:20.822397 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-01-10 14:40:20.822406 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-01-10 14:40:20.822414 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:40:20.822421 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-01-10 14:40:20.822430 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-01-10 14:40:20.822438 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:40:20.822446 | orchestrator | 2026-01-10 14:40:20.822453 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2026-01-10 14:40:20.822461 | orchestrator | Saturday 10 January 2026 14:35:25 +0000 (0:00:02.008) 0:01:48.256 ****** 2026-01-10 14:40:20.822469 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:40:20.822476 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:40:20.822484 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:40:20.822492 | orchestrator | 2026-01-10 14:40:20.822499 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2026-01-10 14:40:20.822507 | orchestrator | Saturday 10 January 2026 14:35:25 +0000 (0:00:00.718) 0:01:48.974 ****** 2026-01-10 14:40:20.822515 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:40:20.822523 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:40:20.822530 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:40:20.822538 | orchestrator | 2026-01-10 14:40:20.822546 | orchestrator | TASK 
[include_role : cinder] *************************************************** 2026-01-10 14:40:20.822591 | orchestrator | Saturday 10 January 2026 14:35:27 +0000 (0:00:01.349) 0:01:50.323 ****** 2026-01-10 14:40:20.822600 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:40:20.822608 | orchestrator | 2026-01-10 14:40:20.822616 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2026-01-10 14:40:20.822624 | orchestrator | Saturday 10 January 2026 14:35:28 +0000 (0:00:00.877) 0:01:51.201 ****** 2026-01-10 14:40:20.822632 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-10 14:40:20.822642 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-10 14:40:20.822651 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-10 14:40:20.822660 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-10 14:40:20.822677 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-10 14:40:20.822691 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-10 14:40:20.822700 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-10 14:40:20.822708 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-10 14:40:20.822716 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-10 14:40:20.822725 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-10 14:40:20.822746 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-10 14:40:20.822755 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-10 14:40:20.822763 | orchestrator | 2026-01-10 14:40:20.822771 | orchestrator | 
TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2026-01-10 14:40:20.822779 | orchestrator | Saturday 10 January 2026 14:35:34 +0000 (0:00:06.840) 0:01:58.041 ****** 2026-01-10 14:40:20.822787 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-01-10 14:40:20.822795 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-10 14:40:20.822804 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-01-10 14:40:20.822825 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-10 14:40:20.822834 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-10 14:40:20.822842 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-10 14:40:20.822850 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-10 14:40:20.822858 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:40:20.822866 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-10 14:40:20.822874 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-01-10 14:40:20.822887 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:40:20.822906 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-10 14:40:20.822915 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-10 14:40:20.822923 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-10 14:40:20.822931 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:40:20.822939 | orchestrator | 2026-01-10 14:40:20.822947 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2026-01-10 14:40:20.822955 | orchestrator | Saturday 10 January 
2026 14:35:36 +0000 (0:00:01.474) 0:01:59.515 ****** 2026-01-10 14:40:20.822963 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-01-10 14:40:20.822971 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-01-10 14:40:20.822981 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:40:20.822989 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-01-10 14:40:20.822997 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-01-10 14:40:20.823010 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:40:20.823018 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-01-10 14:40:20.823026 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-01-10 14:40:20.823034 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:40:20.823042 | orchestrator | 2026-01-10 14:40:20.823049 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2026-01-10 14:40:20.823057 | orchestrator | 
Saturday 10 January 2026 14:35:37 +0000 (0:00:01.394) 0:02:00.909 ****** 2026-01-10 14:40:20.823065 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:40:20.823073 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:40:20.823080 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:40:20.823088 | orchestrator | 2026-01-10 14:40:20.823096 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2026-01-10 14:40:20.823104 | orchestrator | Saturday 10 January 2026 14:35:39 +0000 (0:00:01.338) 0:02:02.248 ****** 2026-01-10 14:40:20.823112 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:40:20.823119 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:40:20.823127 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:40:20.823135 | orchestrator | 2026-01-10 14:40:20.823159 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2026-01-10 14:40:20.823168 | orchestrator | Saturday 10 January 2026 14:35:41 +0000 (0:00:02.329) 0:02:04.577 ****** 2026-01-10 14:40:20.823176 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:40:20.823184 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:40:20.823192 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:40:20.823199 | orchestrator | 2026-01-10 14:40:20.823207 | orchestrator | TASK [include_role : cyborg] *************************************************** 2026-01-10 14:40:20.823215 | orchestrator | Saturday 10 January 2026 14:35:41 +0000 (0:00:00.530) 0:02:05.107 ****** 2026-01-10 14:40:20.823222 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:40:20.823230 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:40:20.823238 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:40:20.823246 | orchestrator | 2026-01-10 14:40:20.823253 | orchestrator | TASK [include_role : designate] ************************************************ 2026-01-10 14:40:20.823261 | orchestrator | Saturday 
10 January 2026 14:35:42 +0000 (0:00:00.305) 0:02:05.413 ****** 2026-01-10 14:40:20.823269 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:40:20.823276 | orchestrator | 2026-01-10 14:40:20.823286 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2026-01-10 14:40:20.823300 | orchestrator | Saturday 10 January 2026 14:35:42 +0000 (0:00:00.729) 0:02:06.142 ****** 2026-01-10 14:40:20.823314 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-10 14:40:20.823337 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-10 14:40:20.823383 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-10 14:40:20.823399 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-10 14:40:20.823429 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 
5672'], 'timeout': '30'}}})  2026-01-10 14:40:20.823444 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-10 14:40:20.823453 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-01-10 14:40:20.823461 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-10 14:40:20.823479 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-10 14:40:20.823487 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-10 14:40:20.823504 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-10 14:40:20.823513 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-10 14:40:20.823521 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-10 14:40:20.823535 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-10 14:40:20.823543 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-10 14:40:20.823552 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-10 14:40:20.823660 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 
'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-01-10 14:40:20.823677 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-10 14:40:20.823686 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-10 14:40:20.823695 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-10 14:40:20.823710 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-01-10 14:40:20.823718 | orchestrator | 2026-01-10 14:40:20.823726 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2026-01-10 14:40:20.823734 | orchestrator | Saturday 10 January 2026 14:35:46 +0000 (0:00:03.382) 0:02:09.525 ****** 2026-01-10 14:40:20.823742 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-10 14:40:20.823758 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-10 14:40:20.823766 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-10 14:40:20.823773 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-10 14:40:20.823787 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-10 14:40:20.823794 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-10 14:40:20.823801 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-01-10 14:40:20.823807 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-10 14:40:20.823814 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:40:20.823828 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-10 14:40:20.823836 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-10 14:40:20.823848 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-10 14:40:20.823855 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-10 14:40:20.823862 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-10 14:40:20.823869 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-01-10 14:40:20.823883 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-10 
14:40:20.823890 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:40:20.823897 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-10 14:40:20.823909 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-10 14:40:20.823917 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-mdns 5672'], 'timeout': '30'}}})  2026-01-10 14:40:20.823923 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-10 14:40:20.823930 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-10 14:40:20.823944 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-01-10 14:40:20.823951 | 
orchestrator | skipping: [testbed-node-2]
2026-01-10 14:40:20.823958 | orchestrator |
2026-01-10 14:40:20.823965 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] *********************
2026-01-10 14:40:20.823972 | orchestrator | Saturday 10 January 2026 14:35:47 +0000 (0:00:00.751) 0:02:10.276 ******
2026-01-10 14:40:20.823979 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2026-01-10 14:40:20.823991 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2026-01-10 14:40:20.823998 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:40:20.824005 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2026-01-10 14:40:20.824011 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2026-01-10 14:40:20.824018 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:40:20.824025 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2026-01-10 14:40:20.824031 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2026-01-10 14:40:20.824038 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:40:20.824044 | orchestrator |
2026-01-10 14:40:20.824052 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] **********
2026-01-10 14:40:20.824058 | orchestrator | Saturday 10 January 2026 14:35:48 +0000 (0:00:00.899) 0:02:11.176 ******
2026-01-10 14:40:20.824065 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:40:20.824072 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:40:20.824078 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:40:20.824084 | orchestrator |
2026-01-10 14:40:20.824091 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] **********
2026-01-10 14:40:20.824098 | orchestrator | Saturday 10 January 2026 14:35:49 +0000 (0:00:01.616) 0:02:12.792 ******
2026-01-10 14:40:20.824104 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:40:20.824111 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:40:20.824118 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:40:20.824124 | orchestrator |
2026-01-10 14:40:20.824131 | orchestrator | TASK [include_role : etcd] *****************************************************
2026-01-10 14:40:20.824138 | orchestrator | Saturday 10 January 2026 14:35:51 +0000 (0:00:01.925) 0:02:14.718 ******
2026-01-10 14:40:20.824150 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:40:20.824163 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:40:20.824180 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:40:20.824191 | orchestrator |
2026-01-10 14:40:20.824202 | orchestrator | TASK [include_role : glance] ***************************************************
2026-01-10 14:40:20.824213 | orchestrator | Saturday 10 January 2026 14:35:52 +0000 (0:00:00.835) 0:02:15.266 ******
2026-01-10 14:40:20.824223 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:40:20.824235 | orchestrator |
2026-01-10 14:40:20.824246 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config]
********************* 2026-01-10 14:40:20.824257 | orchestrator | Saturday 10 January 2026 14:35:52 +0000 (0:00:00.835) 0:02:16.102 ****** 2026-01-10 14:40:20.824286 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-10 14:40:20.824310 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': 
{'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-01-10 14:40:20.824323 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 
'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-10 14:40:20.824360 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-01-10 14:40:20.824374 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-10 14:40:20.824406 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': 
{'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-01-10 14:40:20.824420 | orchestrator | 2026-01-10 14:40:20.824433 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2026-01-10 14:40:20.824446 | orchestrator | Saturday 10 January 2026 14:35:57 +0000 (0:00:04.184) 0:02:20.287 ****** 2026-01-10 14:40:20.824458 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-10 14:40:20.824487 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': 
{'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-01-10 14:40:20.824497 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:40:20.824505 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-10 14:40:20.824540 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check 
inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-01-10 14:40:20.824554 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:40:20.824583 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-10 14:40:20.824604 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 
192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-01-10 14:40:20.824628 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:40:20.824643 | orchestrator | 2026-01-10 14:40:20.824653 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2026-01-10 14:40:20.824664 | orchestrator | Saturday 10 January 2026 14:36:00 +0000 (0:00:02.961) 0:02:23.248 ****** 2026-01-10 14:40:20.824675 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-01-10 14:40:20.824686 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-01-10 14:40:20.824697 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:40:20.824707 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-01-10 14:40:20.824717 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-01-10 14:40:20.824727 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:40:20.824737 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 
2000 rise 2 fall 5', '']}})
2026-01-10 14:40:20.824755 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2026-01-10 14:40:20.824767 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:40:20.824778 | orchestrator |
2026-01-10 14:40:20.824789 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] *************
2026-01-10 14:40:20.824801 | orchestrator | Saturday 10 January 2026 14:36:03 +0000 (0:00:03.533) 0:02:26.781 ******
2026-01-10 14:40:20.824809 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:40:20.824815 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:40:20.824823 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:40:20.824829 | orchestrator |
2026-01-10 14:40:20.824836 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] *************
2026-01-10 14:40:20.824843 | orchestrator | Saturday 10 January 2026 14:36:04 +0000 (0:00:01.331) 0:02:28.113 ******
2026-01-10 14:40:20.824849 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:40:20.824856 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:40:20.824867 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:40:20.824874 | orchestrator |
2026-01-10 14:40:20.824887 | orchestrator | TASK [include_role : gnocchi] **************************************************
2026-01-10 14:40:20.824894 | orchestrator | Saturday 10 January 2026 14:36:07 +0000 (0:00:02.125) 0:02:30.239 ******
2026-01-10 14:40:20.824901 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:40:20.824908 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:40:20.824914 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:40:20.824921 | orchestrator |
2026-01-10 14:40:20.824927 | orchestrator | TASK [include_role : grafana] **************************************************
2026-01-10 14:40:20.824934 | orchestrator | Saturday 10 January 2026 14:36:07 +0000 (0:00:00.528) 0:02:30.767 ******
2026-01-10 14:40:20.824941 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:40:20.824948 | orchestrator |
2026-01-10 14:40:20.824955 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ********************
2026-01-10 14:40:20.824962 | orchestrator | Saturday 10 January 2026 14:36:08 +0000 (0:00:00.865) 0:02:31.632 ******
2026-01-10 14:40:20.824969 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-01-10 14:40:20.824977 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-01-10 14:40:20.824990 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-01-10 14:40:20.824997 | orchestrator |
2026-01-10 14:40:20.825004 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] ***
2026-01-10 14:40:20.825010 | orchestrator | Saturday 10 January 2026 14:36:12 +0000 (0:00:03.592) 0:02:35.224 ******
2026-01-10 14:40:20.825017 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-01-10 14:40:20.825034 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-01-10 14:40:20.825042 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:40:20.825048 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:40:20.825055 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-01-10 14:40:20.825062 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:40:20.825069 | orchestrator |
2026-01-10 14:40:20.825076 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] ***********************
2026-01-10 14:40:20.825083 | orchestrator | Saturday 10 January 2026 14:36:13 +0000 (0:00:00.936) 0:02:36.161 ******
2026-01-10 14:40:20.825090 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})
2026-01-10 14:40:20.825098 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})
2026-01-10 14:40:20.825111 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:40:20.825118 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})
2026-01-10 14:40:20.825124 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})
2026-01-10 14:40:20.825131 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:40:20.825138 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})
2026-01-10 14:40:20.825144 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})
2026-01-10 14:40:20.825152 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:40:20.825158 | orchestrator |
2026-01-10 14:40:20.825165 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************
2026-01-10 14:40:20.825172 | orchestrator | Saturday 10 January 2026 14:36:13 +0000 (0:00:00.747) 0:02:36.909 ******
2026-01-10 14:40:20.825178 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:40:20.825186 |
orchestrator | changed: [testbed-node-1] 2026-01-10 14:40:20.825192 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:40:20.825199 | orchestrator | 2026-01-10 14:40:20.825205 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2026-01-10 14:40:20.825213 | orchestrator | Saturday 10 January 2026 14:36:15 +0000 (0:00:01.426) 0:02:38.335 ****** 2026-01-10 14:40:20.825220 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:40:20.825226 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:40:20.825233 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:40:20.825240 | orchestrator | 2026-01-10 14:40:20.825247 | orchestrator | TASK [include_role : heat] ***************************************************** 2026-01-10 14:40:20.825254 | orchestrator | Saturday 10 January 2026 14:36:17 +0000 (0:00:02.331) 0:02:40.667 ****** 2026-01-10 14:40:20.825261 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:40:20.825269 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:40:20.825280 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:40:20.825295 | orchestrator | 2026-01-10 14:40:20.825310 | orchestrator | TASK [include_role : horizon] ************************************************** 2026-01-10 14:40:20.825320 | orchestrator | Saturday 10 January 2026 14:36:18 +0000 (0:00:00.549) 0:02:41.217 ****** 2026-01-10 14:40:20.825332 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:40:20.825342 | orchestrator | 2026-01-10 14:40:20.825353 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2026-01-10 14:40:20.825364 | orchestrator | Saturday 10 January 2026 14:36:19 +0000 (0:00:01.164) 0:02:42.382 ****** 2026-01-10 14:40:20.825611 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 
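The grafana tasks above show kolla-ansible's `haproxy-config` role looping over service definitions whose `haproxy` sub-dict declares one frontend per entry (`grafana_server` internal, `grafana_server_external` external). A minimal sketch of reading such an item, assuming the dict shape seen in the log (the literal below is reconstructed from the log output and is illustrative, not the role's actual data source):

```python
from ast import literal_eval

# Reconstructed from the loop item printed in the log above; trimmed to the
# fields relevant here. Field names follow what the log shows, nothing more.
item = literal_eval(
    "{'key': 'grafana', 'value': {'container_name': 'grafana', "
    "'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', "
    "'external': False, 'port': '3000', 'listen_port': '3000'}, "
    "'grafana_server_external': {'enabled': True, 'mode': 'http', "
    "'external': True, 'port': '3000', 'listen_port': '3000'}}}}"
)

def listen_ports(service_item):
    """Collect frontend-name -> listen_port pairs from one service entry."""
    haproxy = service_item['value'].get('haproxy', {})
    return {name: cfg['listen_port'] for name, cfg in haproxy.items()}

# Both grafana frontends listen on 3000; only 'external' differs.
print(listen_ports(item))
```

Note that `enabled` appears as the string `'yes'` for the internal frontend but the boolean `True` for the external one, so any consumer of this structure has to tolerate both truthy spellings.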
'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 
'custom_member_list': []}}}}) 2026-01-10 14:40:20.825653 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-10 14:40:20.825687 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-10 14:40:20.825709 | orchestrator | 2026-01-10 14:40:20.825721 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2026-01-10 14:40:20.825733 | orchestrator | Saturday 10 January 2026 14:36:23 +0000 (0:00:04.207) 0:02:46.589 ****** 2026-01-10 14:40:20.825760 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 
'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-10 14:40:20.825781 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:40:20.825795 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-10 14:40:20.825808 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:40:20.825835 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 
'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-10 14:40:20.825853 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:40:20.825861 | orchestrator | 2026-01-10 14:40:20.825868 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2026-01-10 14:40:20.825875 | orchestrator | Saturday 10 January 2026 14:36:24 +0000 (0:00:01.339) 0:02:47.929 ****** 2026-01-10 14:40:20.825883 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 
'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-01-10 14:40:20.825892 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-01-10 14:40:20.825901 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-01-10 14:40:20.825910 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-01-10 14:40:20.825918 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-01-10 14:40:20.825926 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:40:20.825933 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 
'tls_backend': 'no'}})  2026-01-10 14:40:20.825940 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-01-10 14:40:20.825947 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-01-10 14:40:20.825955 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-01-10 14:40:20.825961 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-01-10 14:40:20.825981 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-01-10 14:40:20.825990 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 
'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-01-10 14:40:20.825997 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-01-10 14:40:20.826003 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-01-10 14:40:20.826010 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-01-10 14:40:20.826058 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:40:20.826066 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:40:20.826073 | orchestrator | 2026-01-10 14:40:20.826079 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2026-01-10 14:40:20.826086 | orchestrator | Saturday 10 January 2026 14:36:25 +0000 (0:00:01.211) 0:02:49.140 ****** 2026-01-10 14:40:20.826093 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:40:20.826099 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:40:20.826106 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:40:20.826113 | orchestrator | 2026-01-10 14:40:20.826119 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2026-01-10 14:40:20.826126 | orchestrator | Saturday 10 January 2026 14:36:27 +0000 (0:00:01.557) 0:02:50.698 ****** 2026-01-10 14:40:20.826133 | orchestrator | changed: [testbed-node-0] 2026-01-10 
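The horizon entry in the log above is more elaborate: alongside the plain-HTTP frontends it declares `*_redirect` frontends (HTTP-to-HTTPS redirects) whose `frontend_redirect_extra` rules carve out an exception for ACME challenge paths. A hedged sketch separating the two kinds, with the dict reconstructed from the log (trimmed; hypothetical helper name):

```python
# Reconstructed from the horizon loop item in the log; only the fields used
# below are kept. This mirrors the log output, not kolla-ansible internals.
horizon_haproxy = {
    'horizon': {'enabled': True, 'mode': 'http', 'external': False,
                'port': '443', 'listen_port': '80'},
    'horizon_redirect': {'enabled': True, 'mode': 'redirect',
                         'external': False, 'port': '80', 'listen_port': '80'},
    'horizon_external': {'enabled': True, 'mode': 'http', 'external': True,
                         'port': '443', 'listen_port': '80'},
    'horizon_external_redirect': {'enabled': True, 'mode': 'redirect',
                                  'external': True, 'port': '80',
                                  'listen_port': '80'},
}

def redirect_frontends(haproxy_cfg):
    """Names of frontends that only issue redirects (mode == 'redirect')."""
    return sorted(n for n, c in haproxy_cfg.items() if c['mode'] == 'redirect')

print(redirect_frontends(horizon_haproxy))
```

The `acme_client` entry in the log (with `with_frontend: False`) is the backend those ACME exceptions route to; it contributes no frontend of its own, which is why the firewall task skips it along with the rest.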
14:40:20.826140 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:40:20.826147 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:40:20.826155 | orchestrator | 2026-01-10 14:40:20.826162 | orchestrator | TASK [include_role : influxdb] ************************************************* 2026-01-10 14:40:20.826170 | orchestrator | Saturday 10 January 2026 14:36:29 +0000 (0:00:02.048) 0:02:52.746 ****** 2026-01-10 14:40:20.826178 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:40:20.826185 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:40:20.826193 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:40:20.826200 | orchestrator | 2026-01-10 14:40:20.826208 | orchestrator | TASK [include_role : ironic] *************************************************** 2026-01-10 14:40:20.826215 | orchestrator | Saturday 10 January 2026 14:36:29 +0000 (0:00:00.301) 0:02:53.047 ****** 2026-01-10 14:40:20.826223 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:40:20.826231 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:40:20.826238 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:40:20.826244 | orchestrator | 2026-01-10 14:40:20.826251 | orchestrator | TASK [include_role : keystone] ************************************************* 2026-01-10 14:40:20.826257 | orchestrator | Saturday 10 January 2026 14:36:30 +0000 (0:00:00.594) 0:02:53.641 ****** 2026-01-10 14:40:20.826264 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:40:20.826271 | orchestrator | 2026-01-10 14:40:20.826277 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2026-01-10 14:40:20.826290 | orchestrator | Saturday 10 January 2026 14:36:31 +0000 (0:00:01.166) 0:02:54.808 ****** 2026-01-10 14:40:20.826298 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-10 14:40:20.826337 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-10 14:40:20.826351 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-10 14:40:20.826365 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-10 14:40:20.826377 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-10 14:40:20.826397 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-10 14:40:20.826420 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-10 14:40:20.826435 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-10 14:40:20.826442 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-10 14:40:20.826449 | orchestrator | 2026-01-10 14:40:20.826457 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2026-01-10 14:40:20.826468 | orchestrator | Saturday 10 January 2026 14:36:35 +0000 (0:00:03.689) 0:02:58.497 ****** 2026-01-10 14:40:20.826480 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 
'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-10 14:40:20.826502 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-10 14:40:20.826514 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-10 14:40:20.826526 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:40:20.826587 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-10 14:40:20.826602 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-10 14:40:20.826614 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-10 14:40:20.826632 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:40:20.826645 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-10 14:40:20.826658 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 
'timeout': '30'}}})  2026-01-10 14:40:20.826686 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-10 14:40:20.826698 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:40:20.826710 | orchestrator | 2026-01-10 14:40:20.826720 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2026-01-10 14:40:20.826755 | orchestrator | Saturday 10 January 2026 14:36:36 +0000 (0:00:01.018) 0:02:59.515 ****** 2026-01-10 14:40:20.826772 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-01-10 14:40:20.826780 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-01-10 14:40:20.826788 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:40:20.826795 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  
2026-01-10 14:40:20.826802 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-01-10 14:40:20.826815 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:40:20.826822 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-01-10 14:40:20.826829 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-01-10 14:40:20.826836 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:40:20.826846 | orchestrator | 2026-01-10 14:40:20.826859 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2026-01-10 14:40:20.826870 | orchestrator | Saturday 10 January 2026 14:36:37 +0000 (0:00:00.984) 0:03:00.500 ****** 2026-01-10 14:40:20.826881 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:40:20.826892 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:40:20.826904 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:40:20.826916 | orchestrator | 2026-01-10 14:40:20.826927 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2026-01-10 14:40:20.826938 | orchestrator | Saturday 10 January 2026 14:36:38 +0000 (0:00:01.242) 0:03:01.743 ****** 2026-01-10 14:40:20.826949 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:40:20.826961 | orchestrator | changed: [testbed-node-1] 2026-01-10 
14:40:20.826972 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:40:20.826983 | orchestrator | 2026-01-10 14:40:20.826994 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2026-01-10 14:40:20.827005 | orchestrator | Saturday 10 January 2026 14:36:40 +0000 (0:00:02.386) 0:03:04.129 ****** 2026-01-10 14:40:20.827016 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:40:20.827028 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:40:20.827039 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:40:20.827051 | orchestrator | 2026-01-10 14:40:20.827063 | orchestrator | TASK [include_role : magnum] *************************************************** 2026-01-10 14:40:20.827074 | orchestrator | Saturday 10 January 2026 14:36:41 +0000 (0:00:00.586) 0:03:04.715 ****** 2026-01-10 14:40:20.827086 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:40:20.827099 | orchestrator | 2026-01-10 14:40:20.827111 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2026-01-10 14:40:20.827124 | orchestrator | Saturday 10 January 2026 14:36:42 +0000 (0:00:01.007) 0:03:05.723 ****** 2026-01-10 14:40:20.827146 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-10 14:40:20.827155 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-10 14:40:20.827170 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-10 14:40:20.827178 | orchestrator | changed: [testbed-node-2] 
=> (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-10 14:40:20.827185 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-10 14:40:20.827201 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 
'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-10 14:40:20.827208 | orchestrator | 2026-01-10 14:40:20.827215 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2026-01-10 14:40:20.827222 | orchestrator | Saturday 10 January 2026 14:36:46 +0000 (0:00:04.019) 0:03:09.743 ****** 2026-01-10 14:40:20.827230 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-01-10 14:40:20.827243 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': 
{'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-10 14:40:20.827251 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:40:20.827258 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-01-10 14:40:20.827265 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-10 14:40:20.827272 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:40:20.827288 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-01-10 14:40:20.827301 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-10 14:40:20.827308 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:40:20.827315 | orchestrator | 2026-01-10 14:40:20.827321 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2026-01-10 14:40:20.827329 | orchestrator | Saturday 10 January 2026 14:36:47 +0000 (0:00:01.348) 0:03:11.091 ****** 2026-01-10 14:40:20.827336 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-01-10 14:40:20.827343 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-01-10 14:40:20.827350 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-01-10 14:40:20.827357 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:40:20.827364 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-01-10 14:40:20.827372 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:40:20.827378 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-01-10 14:40:20.827385 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 
'listen_port': '9511'}})  2026-01-10 14:40:20.827392 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:40:20.827398 | orchestrator | 2026-01-10 14:40:20.827405 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2026-01-10 14:40:20.827412 | orchestrator | Saturday 10 January 2026 14:36:49 +0000 (0:00:01.160) 0:03:12.252 ****** 2026-01-10 14:40:20.827419 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:40:20.827425 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:40:20.827432 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:40:20.827438 | orchestrator | 2026-01-10 14:40:20.827445 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2026-01-10 14:40:20.827452 | orchestrator | Saturday 10 January 2026 14:36:50 +0000 (0:00:01.371) 0:03:13.623 ****** 2026-01-10 14:40:20.827458 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:40:20.827465 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:40:20.827471 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:40:20.827478 | orchestrator | 2026-01-10 14:40:20.827485 | orchestrator | TASK [include_role : manila] *************************************************** 2026-01-10 14:40:20.827502 | orchestrator | Saturday 10 January 2026 14:36:52 +0000 (0:00:02.324) 0:03:15.947 ****** 2026-01-10 14:40:20.827513 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:40:20.827523 | orchestrator | 2026-01-10 14:40:20.827533 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2026-01-10 14:40:20.827545 | orchestrator | Saturday 10 January 2026 14:36:54 +0000 (0:00:01.329) 0:03:17.276 ****** 2026-01-10 14:40:20.827594 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 
'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-01-10 14:40:20.827604 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-01-10 14:40:20.827611 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-01-10 14:40:20.827619 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-01-10 14:40:20.827626 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-01-10 14:40:20.827646 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-01-10 14:40:20.827653 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-01-10 14:40:20.827661 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-01-10 14:40:20.827668 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-01-10 14:40:20.827675 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-01-10 14:40:20.827682 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-01-10 14:40:20.827705 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-01-10 14:40:20.827712 | orchestrator |
2026-01-10 14:40:20.827719 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] ***
2026-01-10 14:40:20.827726 | orchestrator | Saturday 10 January 2026 14:36:58 +0000 (0:00:04.073) 0:03:21.350 ******
2026-01-10 14:40:20.827733 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-01-10 14:40:20.827742 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled':
True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-01-10 14:40:20.827755 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-01-10 14:40:20.827767 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-01-10 14:40:20.827777 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:40:20.827804 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-01-10 14:40:20.827828 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-01-10 14:40:20.827840 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-01-10 14:40:20.827851 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-01-10 14:40:20.827862 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:40:20.827873 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-01-10 14:40:20.827885 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-01-10 14:40:20.827904 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-01-10 14:40:20.827927 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-01-10 14:40:20.827940 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:40:20.827951 | orchestrator |
2026-01-10 14:40:20.827962 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************
2026-01-10 14:40:20.827971 | orchestrator | Saturday 10
January 2026 14:36:58 +0000 (0:00:00.691) 0:03:22.041 ******
2026-01-10 14:40:20.827978 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})
2026-01-10 14:40:20.827985 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})
2026-01-10 14:40:20.827992 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:40:20.827998 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})
2026-01-10 14:40:20.828005 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})
2026-01-10 14:40:20.828012 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:40:20.828019 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})
2026-01-10 14:40:20.828026 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})
2026-01-10 14:40:20.828033 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:40:20.828040 | orchestrator |
2026-01-10 14:40:20.828046 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] *************
2026-01-10 14:40:20.828053 | orchestrator | Saturday 10 January 2026 14:37:00 +0000 (0:00:01.189) 0:03:23.231 ******
2026-01-10 14:40:20.828060 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:40:20.828067 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:40:20.828073 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:40:20.828080 | orchestrator |
2026-01-10 14:40:20.828086 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] *************
2026-01-10 14:40:20.828098 | orchestrator | Saturday 10 January 2026 14:37:01 +0000 (0:00:01.371) 0:03:24.602 ******
2026-01-10 14:40:20.828105 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:40:20.828111 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:40:20.828118 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:40:20.828125 | orchestrator |
2026-01-10 14:40:20.828131 | orchestrator | TASK [include_role : mariadb] **************************************************
2026-01-10 14:40:20.828138 | orchestrator | Saturday 10 January 2026 14:37:03 +0000 (0:00:02.467) 0:03:27.070 ******
2026-01-10 14:40:20.828145 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:40:20.828152 | orchestrator |
2026-01-10 14:40:20.828158 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] *******************************
2026-01-10 14:40:20.828165 | orchestrator | Saturday 10 January 2026 14:37:05 +0000 (0:00:03.153) 0:03:28.443 ******
2026-01-10 14:40:20.828172 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-01-10 14:40:20.828179 | orchestrator |
2026-01-10 14:40:20.828186 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ********************
2026-01-10 14:40:20.828193 | orchestrator | Saturday 10 January 2026 14:37:08 +0000 (0:00:03.153) 0:03:31.596 ******
2026-01-10 14:40:20.828211 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-01-10 14:40:20.828220 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-01-10 14:40:20.828227 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:40:20.828235 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306
inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-01-10 14:40:20.828249 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-01-10 14:40:20.828256 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:40:20.828272 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-01-10 14:40:20.828286 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-01-10 14:40:20.828293 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:40:20.828300 | orchestrator |
2026-01-10 14:40:20.828307 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] ***
2026-01-10 14:40:20.828313 | orchestrator | Saturday 10 January 2026 14:37:10 +0000 (0:00:02.240) 0:03:33.837 ******
2026-01-10 14:40:20.828329 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-01-10 14:40:20.828337 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-01-10 14:40:20.828344 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:40:20.828351 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client
3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-01-10 14:40:20.828363 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-01-10 14:40:20.828370 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:40:20.828383 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-01-10 14:40:20.828416 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-01-10 14:40:20.828424 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:40:20.828430 | orchestrator |
2026-01-10 14:40:20.828437 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] ***********************
2026-01-10 14:40:20.828444 | orchestrator | Saturday 10 January 2026 14:37:13 +0000 (0:00:02.437) 0:03:36.275 ******
2026-01-10 14:40:20.828451 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-01-10 14:40:20.828458 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-01-10 14:40:20.828465 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:40:20.828472 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-01-10 14:40:20.828487 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-01-10 14:40:20.828495 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:40:20.828502 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-01-10 14:40:20.828517 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-01-10 14:40:20.828525 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:40:20.828536
| orchestrator | 2026-01-10 14:40:20.828547 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2026-01-10 14:40:20.828557 | orchestrator | Saturday 10 January 2026 14:37:15 +0000 (0:00:02.551) 0:03:38.827 ****** 2026-01-10 14:40:20.828742 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:40:20.828754 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:40:20.828760 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:40:20.828767 | orchestrator | 2026-01-10 14:40:20.828774 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2026-01-10 14:40:20.828781 | orchestrator | Saturday 10 January 2026 14:37:17 +0000 (0:00:01.794) 0:03:40.621 ****** 2026-01-10 14:40:20.828787 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:40:20.828794 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:40:20.828800 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:40:20.828807 | orchestrator | 2026-01-10 14:40:20.828814 | orchestrator | TASK [include_role : masakari] ************************************************* 2026-01-10 14:40:20.828820 | orchestrator | Saturday 10 January 2026 14:37:18 +0000 (0:00:01.404) 0:03:42.026 ****** 2026-01-10 14:40:20.828827 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:40:20.828833 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:40:20.828840 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:40:20.828846 | orchestrator | 2026-01-10 14:40:20.828853 | orchestrator | TASK [include_role : memcached] ************************************************ 2026-01-10 14:40:20.828859 | orchestrator | Saturday 10 January 2026 14:37:19 +0000 (0:00:00.360) 0:03:42.386 ****** 2026-01-10 14:40:20.828866 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:40:20.828873 | orchestrator | 2026-01-10 14:40:20.828879 | orchestrator | TASK [haproxy-config : Copying over 
memcached haproxy config] ****************** 2026-01-10 14:40:20.828886 | orchestrator | Saturday 10 January 2026 14:37:20 +0000 (0:00:01.349) 0:03:43.736 ****** 2026-01-10 14:40:20.828893 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-01-10 14:40:20.828919 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-01-10 14:40:20.828935 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 
'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-01-10 14:40:20.828942 | orchestrator | 2026-01-10 14:40:20.828949 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2026-01-10 14:40:20.828955 | orchestrator | Saturday 10 January 2026 14:37:22 +0000 (0:00:01.594) 0:03:45.331 ****** 2026-01-10 14:40:20.828963 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-01-10 14:40:20.828969 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-01-10 14:40:20.828976 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:40:20.828983 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:40:20.828990 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-01-10 14:40:20.828997 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:40:20.829003 | orchestrator | 2026-01-10 14:40:20.829010 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2026-01-10 14:40:20.829016 | orchestrator | Saturday 10 January 2026 14:37:22 +0000 (0:00:00.399) 0:03:45.731 ****** 2026-01-10 14:40:20.829028 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 
3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-01-10 14:40:20.829231 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-01-10 14:40:20.829245 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:40:20.829252 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:40:20.829258 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-01-10 14:40:20.829265 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:40:20.829271 | orchestrator | 2026-01-10 14:40:20.829277 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2026-01-10 14:40:20.829283 | orchestrator | Saturday 10 January 2026 14:37:23 +0000 (0:00:00.906) 0:03:46.637 ****** 2026-01-10 14:40:20.829290 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:40:20.829296 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:40:20.829302 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:40:20.829308 | orchestrator | 2026-01-10 14:40:20.829314 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2026-01-10 14:40:20.829320 | orchestrator | Saturday 10 January 2026 14:37:23 +0000 (0:00:00.487) 0:03:47.124 ****** 2026-01-10 14:40:20.829326 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:40:20.829332 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:40:20.829338 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:40:20.829344 | orchestrator | 
2026-01-10 14:40:20.829351 | orchestrator | TASK [include_role : mistral] ************************************************** 2026-01-10 14:40:20.829357 | orchestrator | Saturday 10 January 2026 14:37:25 +0000 (0:00:01.368) 0:03:48.493 ****** 2026-01-10 14:40:20.829363 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:40:20.829369 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:40:20.829375 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:40:20.829381 | orchestrator | 2026-01-10 14:40:20.829387 | orchestrator | TASK [include_role : neutron] ************************************************** 2026-01-10 14:40:20.829393 | orchestrator | Saturday 10 January 2026 14:37:25 +0000 (0:00:00.348) 0:03:48.842 ****** 2026-01-10 14:40:20.829399 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:40:20.829405 | orchestrator | 2026-01-10 14:40:20.829412 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2026-01-10 14:40:20.829418 | orchestrator | Saturday 10 January 2026 14:37:27 +0000 (0:00:01.497) 0:03:50.339 ****** 2026-01-10 14:40:20.829424 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-10 14:40:20.829439 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-01-10 14:40:20.829492 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-01-10 14:40:20.829503 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 
'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-01-10 14:40:20.829510 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-01-10 14:40:20.829516 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  
2026-01-10 14:40:20.829524 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-10 14:40:20.829536 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-10 14:40:20.829542 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-01-10 14:40:20.829629 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-10 14:40:20.829641 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-01-10 14:40:20.829648 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-01-10 14:40:20.829655 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-10 14:40:20.829667 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-10 14:40:20.829674 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-01-10 14:40:20.829727 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-01-10 14:40:20.829738 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-01-10 
14:40:20.829746 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-01-10 14:40:20.829757 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-01-10 14:40:20.829763 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-01-10 14:40:20.829801 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-01-10 14:40:20.829810 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-01-10 14:40:20.829817 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 
'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-10 14:40:20.829824 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-10 14:40:20.829835 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-01-10 14:40:20.829842 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-10 14:40:20.829916 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-01-10 14:40:20.829927 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 
2026-01-10 14:40:20.829934 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-01-10 14:40:20.829940 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-10 14:40:20.829955 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-01-10 
14:40:20.829962 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-01-10 14:40:20.830037 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-01-10 14:40:20.830051 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-01-10 14:40:20.830058 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-01-10 14:40:20.830071 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-01-10 14:40:20.830078 
| orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-01-10 14:40:20.830131 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-01-10 14:40:20.830141 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-10 14:40:20.830148 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-10 14:40:20.830155 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-01-10 14:40:20.830167 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-10 14:40:20.830174 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-01-10 14:40:20.830191 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-01-10 14:40:20.830244 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-10 14:40:20.830254 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-01-10 14:40:20.830260 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-01-10 14:40:20.830275 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': 
['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-01-10 14:40:20.830282 | orchestrator | 2026-01-10 14:40:20.830288 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2026-01-10 14:40:20.830295 | orchestrator | Saturday 10 January 2026 14:37:31 +0000 (0:00:04.375) 0:03:54.715 ****** 2026-01-10 14:40:20.830301 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-10 14:40:20.830351 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-01-10 14:40:20.830361 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-01-10 14:40:20.830368 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': 
'30'}}})  2026-01-10 14:40:20.830380 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-01-10 14:40:20.830386 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-01-10 14:40:20.830393 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}}})  2026-01-10 14:40:20.830448 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-10 14:40:20.830460 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-10 14:40:20.830467 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-01-10 14:40:20.830479 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-01-10 14:40:20.830486 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-01-10 14:40:20.830493 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': 
{'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-10 14:40:20.830582 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-01-10 14:40:20.830594 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-01-10 14:40:20.830607 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-01-10 14:40:20.830613 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-01-10 14:40:20.830620 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-01-10 14:40:20.830627 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-10 14:40:20.830693 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-10 14:40:20.830704 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 
'timeout': '30'}}})  2026-01-10 14:40:20.830716 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-10 14:40:20.830723 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-10 14:40:20.830730 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-01-10 14:40:20.830737 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-01-10 14:40:20.830791 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-01-10 14:40:20.830802 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-10 14:40:20.830814 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-01-10 14:40:20.830821 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:40:20.830827 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-01-10 14:40:20.830834 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-01-10 14:40:20.830840 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-01-10 14:40:20.830905 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 
'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-01-10 14:40:20.830921 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-10 14:40:20.830937 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-01-10 14:40:20.830944 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-01-10 14:40:20.830950 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-01-10 14:40:20.831002 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': 
{'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-01-10 14:40:20.831018 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-10 14:40:20.831025 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-01-10 14:40:20.831032 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-10 14:40:20.831038 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:40:20.831044 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-01-10 14:40:20.831051 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-10 14:40:20.831058 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': 
True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-01-10 14:40:20.831088 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-01-10 14:40:20.831101 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-10 14:40:20.831107 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-01-10 14:40:20.831114 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-01-10 14:40:20.831131 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-01-10 14:40:20.831138 | orchestrator | skipping: 
[testbed-node-2] 2026-01-10 14:40:20.831144 | orchestrator | 2026-01-10 14:40:20.831150 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2026-01-10 14:40:20.831157 | orchestrator | Saturday 10 January 2026 14:37:33 +0000 (0:00:01.577) 0:03:56.292 ****** 2026-01-10 14:40:20.831164 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-01-10 14:40:20.831170 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-01-10 14:40:20.831183 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:40:20.831212 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-01-10 14:40:20.831220 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-01-10 14:40:20.831227 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:40:20.831233 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-01-10 14:40:20.831239 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-01-10 14:40:20.831246 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:40:20.831252 | orchestrator | 2026-01-10 14:40:20.831258 | 
orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2026-01-10 14:40:20.831264 | orchestrator | Saturday 10 January 2026 14:37:35 +0000 (0:00:02.212) 0:03:58.505 ****** 2026-01-10 14:40:20.831271 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:40:20.831285 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:40:20.831292 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:40:20.831298 | orchestrator | 2026-01-10 14:40:20.831305 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2026-01-10 14:40:20.831311 | orchestrator | Saturday 10 January 2026 14:37:36 +0000 (0:00:01.473) 0:03:59.979 ****** 2026-01-10 14:40:20.831317 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:40:20.831323 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:40:20.831329 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:40:20.831335 | orchestrator | 2026-01-10 14:40:20.831342 | orchestrator | TASK [include_role : placement] ************************************************ 2026-01-10 14:40:20.831348 | orchestrator | Saturday 10 January 2026 14:37:39 +0000 (0:00:02.299) 0:04:02.278 ****** 2026-01-10 14:40:20.831354 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:40:20.831360 | orchestrator | 2026-01-10 14:40:20.831366 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2026-01-10 14:40:20.831372 | orchestrator | Saturday 10 January 2026 14:37:40 +0000 (0:00:01.232) 0:04:03.510 ****** 2026-01-10 14:40:20.831379 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-10 14:40:20.831386 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-10 14:40:20.831421 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-10 14:40:20.831429 | orchestrator | 2026-01-10 14:40:20.831436 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2026-01-10 14:40:20.831443 | orchestrator | Saturday 10 January 2026 14:37:43 +0000 (0:00:03.640) 0:04:07.151 ****** 2026-01-10 14:40:20.831449 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-10 14:40:20.831455 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:40:20.831462 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 
'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-10 14:40:20.831468 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:40:20.831475 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-10 14:40:20.831486 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:40:20.831492 | orchestrator | 2026-01-10 14:40:20.831498 | orchestrator | TASK [haproxy-config : Configuring firewall for 
placement] ********************* 2026-01-10 14:40:20.831504 | orchestrator | Saturday 10 January 2026 14:37:44 +0000 (0:00:00.543) 0:04:07.695 ****** 2026-01-10 14:40:20.831511 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-01-10 14:40:20.831517 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-01-10 14:40:20.831525 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:40:20.831553 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-01-10 14:40:20.831631 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-01-10 14:40:20.831640 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:40:20.831647 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-01-10 14:40:20.831655 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-01-10 14:40:20.831662 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:40:20.831669 | orchestrator | 2026-01-10 14:40:20.831676 | orchestrator | TASK 
[proxysql-config : Copying over placement ProxySQL users config] ********** 2026-01-10 14:40:20.831683 | orchestrator | Saturday 10 January 2026 14:37:45 +0000 (0:00:00.758) 0:04:08.453 ****** 2026-01-10 14:40:20.831691 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:40:20.831697 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:40:20.831705 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:40:20.831711 | orchestrator | 2026-01-10 14:40:20.831718 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2026-01-10 14:40:20.831725 | orchestrator | Saturday 10 January 2026 14:37:46 +0000 (0:00:01.331) 0:04:09.785 ****** 2026-01-10 14:40:20.831732 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:40:20.831739 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:40:20.831746 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:40:20.831753 | orchestrator | 2026-01-10 14:40:20.831760 | orchestrator | TASK [include_role : nova] ***************************************************** 2026-01-10 14:40:20.831767 | orchestrator | Saturday 10 January 2026 14:37:48 +0000 (0:00:02.184) 0:04:11.969 ****** 2026-01-10 14:40:20.831774 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:40:20.831781 | orchestrator | 2026-01-10 14:40:20.831788 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2026-01-10 14:40:20.831801 | orchestrator | Saturday 10 January 2026 14:37:50 +0000 (0:00:01.578) 0:04:13.548 ****** 2026-01-10 14:40:20.831809 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-10 14:40:20.831817 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-10 14:40:20.831854 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-10 14:40:20.831864 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-10 14:40:20.831872 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-10 14:40:20.831884 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-10 14:40:20.831892 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': 
'8775', 'tls_backend': 'no'}}}}) 2026-01-10 14:40:20.831922 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-10 14:40:20.831931 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-10 14:40:20.831938 | orchestrator | 2026-01-10 14:40:20.831945 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2026-01-10 14:40:20.831952 | orchestrator | Saturday 10 January 2026 14:37:54 +0000 (0:00:04.566) 0:04:18.114 ****** 2026-01-10 14:40:20.831959 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-10 14:40:20.831971 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-10 14:40:20.831978 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-10 14:40:20.831985 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:40:20.832015 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-10 14:40:20.832024 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-10 14:40:20.832045 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-10 14:40:20.832051 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:40:20.832058 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 
'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-10 14:40:20.832065 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-10 14:40:20.832094 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-10 14:40:20.832102 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:40:20.832109 | orchestrator | 2026-01-10 14:40:20.832115 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2026-01-10 14:40:20.832122 | orchestrator | Saturday 10 January 2026 14:37:56 +0000 (0:00:01.564) 0:04:19.678 ****** 2026-01-10 14:40:20.832128 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-01-10 14:40:20.832140 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-01-10 14:40:20.832147 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-01-10 14:40:20.832153 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-01-10 14:40:20.832160 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:40:20.832166 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-01-10 14:40:20.832172 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-01-10 14:40:20.832178 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-01-10 14:40:20.832185 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-01-10 14:40:20.832191 | 
orchestrator | skipping: [testbed-node-1] 2026-01-10 14:40:20.832198 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-01-10 14:40:20.832204 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-01-10 14:40:20.832211 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-01-10 14:40:20.832217 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-01-10 14:40:20.832223 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:40:20.832229 | orchestrator | 2026-01-10 14:40:20.832236 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2026-01-10 14:40:20.832242 | orchestrator | Saturday 10 January 2026 14:37:57 +0000 (0:00:00.967) 0:04:20.646 ****** 2026-01-10 14:40:20.832248 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:40:20.832254 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:40:20.832260 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:40:20.832267 | orchestrator | 2026-01-10 14:40:20.832273 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2026-01-10 14:40:20.832279 | orchestrator | Saturday 10 January 2026 14:37:59 +0000 (0:00:01.585) 0:04:22.231 ****** 2026-01-10 14:40:20.832286 | orchestrator | changed: [testbed-node-0] 2026-01-10 
14:40:20.832292 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:40:20.832299 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:40:20.832305 | orchestrator | 2026-01-10 14:40:20.832332 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2026-01-10 14:40:20.832340 | orchestrator | Saturday 10 January 2026 14:38:01 +0000 (0:00:02.240) 0:04:24.472 ****** 2026-01-10 14:40:20.832352 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:40:20.832358 | orchestrator | 2026-01-10 14:40:20.832364 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2026-01-10 14:40:20.832370 | orchestrator | Saturday 10 January 2026 14:38:03 +0000 (0:00:01.701) 0:04:26.173 ****** 2026-01-10 14:40:20.832377 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2026-01-10 14:40:20.832383 | orchestrator | 2026-01-10 14:40:20.832389 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2026-01-10 14:40:20.832395 | orchestrator | Saturday 10 January 2026 14:38:03 +0000 (0:00:00.882) 0:04:27.056 ****** 2026-01-10 14:40:20.832402 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-01-10 14:40:20.832409 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 
'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-01-10 14:40:20.832416 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-01-10 14:40:20.832422 | orchestrator | 2026-01-10 14:40:20.832429 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2026-01-10 14:40:20.832437 | orchestrator | Saturday 10 January 2026 14:38:08 +0000 (0:00:04.598) 0:04:31.654 ****** 2026-01-10 14:40:20.832527 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-10 14:40:20.832557 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:40:20.832590 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 
'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-10 14:40:20.832599 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:40:20.832610 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-10 14:40:20.832628 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:40:20.832638 | orchestrator | 2026-01-10 14:40:20.832694 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2026-01-10 14:40:20.832707 | orchestrator | Saturday 10 January 2026 14:38:09 +0000 (0:00:01.399) 0:04:33.054 ****** 2026-01-10 14:40:20.832717 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-01-10 14:40:20.832727 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-01-10 14:40:20.832737 | orchestrator | skipping: [testbed-node-0] 
2026-01-10 14:40:20.832748 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-01-10 14:40:20.832758 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-01-10 14:40:20.832769 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:40:20.832780 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-01-10 14:40:20.832788 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-01-10 14:40:20.832795 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:40:20.832801 | orchestrator | 2026-01-10 14:40:20.832807 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-01-10 14:40:20.832813 | orchestrator | Saturday 10 January 2026 14:38:11 +0000 (0:00:01.530) 0:04:34.584 ****** 2026-01-10 14:40:20.832819 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:40:20.832826 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:40:20.832832 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:40:20.832838 | orchestrator | 2026-01-10 14:40:20.832844 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-01-10 14:40:20.832850 | orchestrator | Saturday 10 January 2026 
14:38:14 +0000 (0:00:02.814) 0:04:37.398 ****** 2026-01-10 14:40:20.832856 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:40:20.832862 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:40:20.832868 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:40:20.832874 | orchestrator | 2026-01-10 14:40:20.832880 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2026-01-10 14:40:20.832886 | orchestrator | Saturday 10 January 2026 14:38:17 +0000 (0:00:03.034) 0:04:40.433 ****** 2026-01-10 14:40:20.832893 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2026-01-10 14:40:20.832899 | orchestrator | 2026-01-10 14:40:20.832906 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2026-01-10 14:40:20.832912 | orchestrator | Saturday 10 January 2026 14:38:18 +0000 (0:00:01.455) 0:04:41.888 ****** 2026-01-10 14:40:20.832924 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-10 14:40:20.832931 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-10 14:40:20.832938 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:40:20.832944 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:40:20.832979 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-10 14:40:20.832987 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:40:20.832993 | orchestrator | 2026-01-10 14:40:20.832999 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2026-01-10 14:40:20.833006 | orchestrator | Saturday 10 January 2026 14:38:20 +0000 (0:00:01.314) 0:04:43.203 ****** 2026-01-10 14:40:20.833012 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-10 14:40:20.833019 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:40:20.833025 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-10 14:40:20.833031 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:40:20.833038 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-10 14:40:20.833044 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:40:20.833055 | orchestrator | 2026-01-10 14:40:20.833062 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2026-01-10 14:40:20.833068 | orchestrator | Saturday 10 January 2026 14:38:21 +0000 (0:00:01.350) 0:04:44.553 ****** 2026-01-10 14:40:20.833075 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:40:20.833081 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:40:20.833087 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:40:20.833093 | orchestrator | 2026-01-10 14:40:20.833099 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-01-10 14:40:20.833105 | orchestrator | Saturday 10 January 2026 14:38:23 +0000 (0:00:01.868) 0:04:46.421 ****** 2026-01-10 14:40:20.833111 
| orchestrator | ok: [testbed-node-0] 2026-01-10 14:40:20.833118 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:40:20.833124 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:40:20.833130 | orchestrator | 2026-01-10 14:40:20.833137 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-01-10 14:40:20.833143 | orchestrator | Saturday 10 January 2026 14:38:25 +0000 (0:00:02.481) 0:04:48.903 ****** 2026-01-10 14:40:20.833149 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:40:20.833155 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:40:20.833161 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:40:20.833167 | orchestrator | 2026-01-10 14:40:20.833173 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2026-01-10 14:40:20.833180 | orchestrator | Saturday 10 January 2026 14:38:28 +0000 (0:00:03.138) 0:04:52.041 ****** 2026-01-10 14:40:20.833186 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2026-01-10 14:40:20.833192 | orchestrator | 2026-01-10 14:40:20.833199 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2026-01-10 14:40:20.833205 | orchestrator | Saturday 10 January 2026 14:38:29 +0000 (0:00:00.876) 0:04:52.918 ****** 2026-01-10 14:40:20.833235 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-01-10 
14:40:20.833243 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:40:20.833250 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-01-10 14:40:20.833256 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:40:20.833263 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-01-10 14:40:20.833269 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:40:20.833275 | orchestrator | 2026-01-10 14:40:20.833281 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2026-01-10 14:40:20.833288 | orchestrator | Saturday 10 January 2026 14:38:31 +0000 (0:00:01.396) 0:04:54.315 ****** 2026-01-10 14:40:20.833299 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 
'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-01-10 14:40:20.833306 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:40:20.833312 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-01-10 14:40:20.833319 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:40:20.833325 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-01-10 14:40:20.833331 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:40:20.833337 | orchestrator | 2026-01-10 14:40:20.833343 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2026-01-10 14:40:20.833350 | orchestrator | Saturday 10 January 2026 14:38:32 +0000 (0:00:01.457) 0:04:55.772 ****** 2026-01-10 14:40:20.833356 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:40:20.833362 | orchestrator | skipping: [testbed-node-1] 
2026-01-10 14:40:20.833368 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:40:20.833374 | orchestrator | 2026-01-10 14:40:20.833380 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-01-10 14:40:20.833386 | orchestrator | Saturday 10 January 2026 14:38:34 +0000 (0:00:01.551) 0:04:57.324 ****** 2026-01-10 14:40:20.833392 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:40:20.833399 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:40:20.833405 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:40:20.833411 | orchestrator | 2026-01-10 14:40:20.833417 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-01-10 14:40:20.833423 | orchestrator | Saturday 10 January 2026 14:38:36 +0000 (0:00:02.568) 0:04:59.892 ****** 2026-01-10 14:40:20.833430 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:40:20.833436 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:40:20.833442 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:40:20.833448 | orchestrator | 2026-01-10 14:40:20.833454 | orchestrator | TASK [include_role : octavia] ************************************************** 2026-01-10 14:40:20.833460 | orchestrator | Saturday 10 January 2026 14:38:40 +0000 (0:00:03.470) 0:05:03.363 ****** 2026-01-10 14:40:20.833490 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:40:20.833498 | orchestrator | 2026-01-10 14:40:20.833504 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2026-01-10 14:40:20.833510 | orchestrator | Saturday 10 January 2026 14:38:41 +0000 (0:00:01.606) 0:05:04.969 ****** 2026-01-10 14:40:20.833517 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': 
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-10 14:40:20.833529 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-10 14:40:20.833536 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 
'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-10 14:40:20.833543 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-10 14:40:20.833549 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-10 14:40:20.833601 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-10 14:40:20.833616 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-10 14:40:20.833624 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-10 14:40:20.833630 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 
'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-10 14:40:20.833637 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-10 14:40:20.833644 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-10 14:40:20.833672 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-10 14:40:20.833685 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-10 14:40:20.833692 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-10 14:40:20.833698 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-10 14:40:20.833705 | orchestrator | 2026-01-10 14:40:20.833711 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2026-01-10 14:40:20.833717 | orchestrator | Saturday 10 January 2026 14:38:45 +0000 (0:00:03.486) 0:05:08.456 ****** 2026-01-10 14:40:20.833724 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-10 14:40:20.833730 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-10 14:40:20.833761 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-10 14:40:20.833774 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-10 14:40:20.833781 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-10 14:40:20.833787 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:40:20.833794 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-10 14:40:20.833801 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-10 14:40:20.833807 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 
'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-10 14:40:20.833834 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-10 14:40:20.833846 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-10 14:40:20.833852 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:40:20.833859 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 
'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-10 14:40:20.833865 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-10 14:40:20.833872 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-10 14:40:20.833878 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-10 14:40:20.833908 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-10 14:40:20.833916 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:40:20.833923 | orchestrator | 2026-01-10 14:40:20.833929 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2026-01-10 14:40:20.833935 | orchestrator | Saturday 10 January 2026 14:38:46 +0000 (0:00:00.752) 0:05:09.208 ****** 2026-01-10 14:40:20.833942 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-01-10 14:40:20.833948 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-01-10 14:40:20.833954 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:40:20.833961 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-01-10 14:40:20.833967 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-01-10 14:40:20.833973 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:40:20.833980 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-01-10 14:40:20.833986 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-01-10 14:40:20.833992 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:40:20.833998 | orchestrator | 2026-01-10 14:40:20.834005 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2026-01-10 14:40:20.834035 | orchestrator | Saturday 10 January 2026 14:38:47 +0000 (0:00:01.519) 0:05:10.727 ****** 2026-01-10 14:40:20.834044 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:40:20.834050 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:40:20.834057 | orchestrator | changed: [testbed-node-2] 2026-01-10 
14:40:20.834063 | orchestrator | 2026-01-10 14:40:20.834070 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2026-01-10 14:40:20.834076 | orchestrator | Saturday 10 January 2026 14:38:49 +0000 (0:00:01.479) 0:05:12.207 ****** 2026-01-10 14:40:20.834082 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:40:20.834088 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:40:20.834094 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:40:20.834100 | orchestrator | 2026-01-10 14:40:20.834106 | orchestrator | TASK [include_role : opensearch] *********************************************** 2026-01-10 14:40:20.834113 | orchestrator | Saturday 10 January 2026 14:38:51 +0000 (0:00:02.228) 0:05:14.436 ****** 2026-01-10 14:40:20.834119 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:40:20.834125 | orchestrator | 2026-01-10 14:40:20.834131 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2026-01-10 14:40:20.834137 | orchestrator | Saturday 10 January 2026 14:38:52 +0000 (0:00:01.398) 0:05:15.834 ****** 2026-01-10 14:40:20.834149 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 
'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-10 14:40:20.834183 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-10 14:40:20.834191 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-10 14:40:20.834199 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 
'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-10 14:40:20.834206 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 
'opensearch', 'auth_pass': 'password'}}}}) 2026-01-10 14:40:20.834241 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-10 14:40:20.834250 | orchestrator | 2026-01-10 14:40:20.834256 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2026-01-10 14:40:20.834263 | orchestrator | Saturday 10 January 2026 14:38:58 +0000 (0:00:05.743) 0:05:21.578 ****** 2026-01-10 14:40:20.834269 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-01-10 14:40:20.834276 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-01-10 14:40:20.834288 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:40:20.834295 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-01-10 14:40:20.834322 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-01-10 14:40:20.834331 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:40:20.834338 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': 
['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-01-10 14:40:20.834345 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-01-10 14:40:20.834356 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:40:20.834362 | orchestrator | 2026-01-10 14:40:20.834369 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2026-01-10 14:40:20.834375 | orchestrator | Saturday 10 January 2026 14:38:59 +0000 (0:00:00.707) 0:05:22.285 ****** 2026-01-10 
14:40:20.834381 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-01-10 14:40:20.834388 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-01-10 14:40:20.834394 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-01-10 14:40:20.834401 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:40:20.834407 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-01-10 14:40:20.834413 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-01-10 14:40:20.834420 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-01-10 14:40:20.834426 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:40:20.834432 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-01-10 14:40:20.834461 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-01-10 14:40:20.834469 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-01-10 14:40:20.834475 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:40:20.834481 | orchestrator | 2026-01-10 14:40:20.834487 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2026-01-10 14:40:20.834493 | orchestrator | Saturday 10 January 2026 14:39:00 +0000 (0:00:00.939) 0:05:23.224 ****** 2026-01-10 14:40:20.834499 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:40:20.834506 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:40:20.834512 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:40:20.834518 | orchestrator | 2026-01-10 14:40:20.834524 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2026-01-10 14:40:20.834530 | orchestrator | Saturday 10 January 2026 14:39:00 +0000 (0:00:00.825) 0:05:24.050 ****** 2026-01-10 14:40:20.834536 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:40:20.834543 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:40:20.834549 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:40:20.834555 | orchestrator | 2026-01-10 14:40:20.834575 | orchestrator | TASK [include_role : prometheus] *********************************************** 2026-01-10 14:40:20.834582 | orchestrator | Saturday 10 January 2026 14:39:02 +0000 (0:00:01.440) 0:05:25.490 ****** 2026-01-10 14:40:20.834588 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, 
testbed-node-2 2026-01-10 14:40:20.834598 | orchestrator | 2026-01-10 14:40:20.834604 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2026-01-10 14:40:20.834611 | orchestrator | Saturday 10 January 2026 14:39:03 +0000 (0:00:01.462) 0:05:26.953 ****** 2026-01-10 14:40:20.834617 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-01-10 14:40:20.834624 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-10 14:40:20.834631 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:40:20.834638 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:40:20.834673 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-10 14:40:20.834681 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-01-10 14:40:20.834692 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-10 14:40:20.834699 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:40:20.834705 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:40:20.834712 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-10 14:40:20.834718 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-01-10 14:40:20.834747 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-10 14:40:20.834755 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:40:20.834766 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:40:20.834772 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-10 14:40:20.834779 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': 
['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-01-10 14:40:20.834787 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-01-10 14:40:20.834801 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:40:20.834808 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:40:20.834819 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-10 14:40:20.834825 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-01-10 14:40:20.834832 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-01-10 14:40:20.834839 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:40:20.834855 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 
'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:40:20.834862 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-10 14:40:20.834873 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-01-10 14:40:20.834880 | orchestrator 
| skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-01-10 14:40:20.834891 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:40:20.834897 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 
14:40:20.834904 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-10 14:40:20.834910 | orchestrator | 2026-01-10 14:40:20.834923 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2026-01-10 14:40:20.834930 | orchestrator | Saturday 10 January 2026 14:39:08 +0000 (0:00:04.585) 0:05:31.539 ****** 2026-01-10 14:40:20.834944 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-01-10 14:40:20.834951 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 
'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-10 14:40:20.834957 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:40:20.834964 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:40:20.834970 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-10 14:40:20.834977 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-01-10 14:40:20.834995 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-01-10 14:40:20.835002 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:40:20.835008 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-01-10 14:40:20.835015 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:40:20.835021 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-10 14:40:20.835028 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-10 14:40:20.835034 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:40:20.835047 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:40:20.835058 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:40:20.835064 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-10 14:40:20.835071 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-01-10 14:40:20.835078 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 
'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-01-10 14:40:20.835085 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-01-10 14:40:20.835104 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:40:20.835111 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:40:20.835117 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-10 14:40:20.835124 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:40:20.835130 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-10 14:40:20.835137 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:40:20.835144 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:40:20.835150 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-10 14:40:20.835168 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-01-10 14:40:20.835176 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-01-10 14:40:20.835183 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:40:20.835189 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:40:20.835196 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-10 14:40:20.835202 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:40:20.835208 | orchestrator | 2026-01-10 14:40:20.835215 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2026-01-10 14:40:20.835221 | orchestrator | Saturday 10 January 2026 14:39:09 +0000 (0:00:01.280) 0:05:32.819 ****** 2026-01-10 14:40:20.835227 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-01-10 14:40:20.835238 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 
'active_passive': True}})  2026-01-10 14:40:20.835245 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-01-10 14:40:20.835252 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-01-10 14:40:20.835261 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:40:20.835272 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-01-10 14:40:20.835279 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-01-10 14:40:20.835285 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-01-10 14:40:20.835292 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-01-10 14:40:20.835299 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-01-10 14:40:20.835305 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:40:20.835311 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-01-10 14:40:20.835318 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-01-10 14:40:20.835324 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-01-10 14:40:20.835331 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:40:20.835337 | orchestrator | 2026-01-10 14:40:20.835343 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2026-01-10 14:40:20.835349 | orchestrator | Saturday 10 January 2026 14:39:10 +0000 (0:00:01.054) 0:05:33.874 ****** 2026-01-10 14:40:20.835356 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:40:20.835362 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:40:20.835368 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:40:20.835374 | orchestrator | 2026-01-10 14:40:20.835380 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2026-01-10 14:40:20.835391 | orchestrator | Saturday 10 January 2026 
14:39:11 +0000 (0:00:00.536) 0:05:34.410 ****** 2026-01-10 14:40:20.835398 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:40:20.835404 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:40:20.835410 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:40:20.835416 | orchestrator | 2026-01-10 14:40:20.835422 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2026-01-10 14:40:20.835428 | orchestrator | Saturday 10 January 2026 14:39:12 +0000 (0:00:01.442) 0:05:35.853 ****** 2026-01-10 14:40:20.835435 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:40:20.835441 | orchestrator | 2026-01-10 14:40:20.835447 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2026-01-10 14:40:20.835453 | orchestrator | Saturday 10 January 2026 14:39:14 +0000 (0:00:01.798) 0:05:37.651 ****** 2026-01-10 14:40:20.835466 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-10 14:40:20.835474 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-10 14:40:20.835481 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 
'rabbitmq'}}}}) 2026-01-10 14:40:20.835488 | orchestrator | 2026-01-10 14:40:20.835494 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2026-01-10 14:40:20.835505 | orchestrator | Saturday 10 January 2026 14:39:17 +0000 (0:00:02.701) 0:05:40.353 ****** 2026-01-10 14:40:20.835511 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-01-10 14:40:20.835518 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:40:20.835530 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-01-10 14:40:20.835538 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:40:20.835545 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-01-10 14:40:20.835552 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:40:20.835571 | orchestrator | 2026-01-10 14:40:20.835577 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2026-01-10 14:40:20.835583 | orchestrator | Saturday 10 January 2026 14:39:17 +0000 (0:00:00.413) 0:05:40.766 ****** 2026-01-10 14:40:20.835592 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-01-10 14:40:20.835602 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:40:20.835611 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-01-10 14:40:20.835628 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:40:20.835637 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-01-10 14:40:20.835646 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:40:20.835655 | orchestrator | 2026-01-10 14:40:20.835665 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2026-01-10 14:40:20.835674 | orchestrator | Saturday 10 January 2026 14:39:18 +0000 (0:00:01.059) 0:05:41.826 ****** 2026-01-10 14:40:20.835683 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:40:20.835696 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:40:20.835706 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:40:20.835715 | orchestrator | 2026-01-10 14:40:20.835726 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2026-01-10 14:40:20.835736 | orchestrator | Saturday 10 January 2026 14:39:19 +0000 (0:00:00.440) 0:05:42.266 ****** 2026-01-10 14:40:20.835746 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:40:20.835756 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:40:20.835765 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:40:20.835774 | orchestrator | 2026-01-10 14:40:20.835782 | orchestrator | TASK [include_role : skyline] ************************************************** 2026-01-10 14:40:20.835791 | orchestrator | Saturday 10 January 2026 14:39:20 +0000 
(0:00:01.380) 0:05:43.647 ****** 2026-01-10 14:40:20.835802 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:40:20.835811 | orchestrator | 2026-01-10 14:40:20.835821 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2026-01-10 14:40:20.835832 | orchestrator | Saturday 10 January 2026 14:39:22 +0000 (0:00:01.834) 0:05:45.481 ****** 2026-01-10 14:40:20.835843 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-01-10 14:40:20.835872 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-01-10 14:40:20.835883 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-01-10 14:40:20.835901 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 
'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-01-10 14:40:20.835914 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-01-10 14:40:20.835935 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-01-10 14:40:20.835946 | orchestrator | 2026-01-10 14:40:20.835956 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2026-01-10 14:40:20.835967 | orchestrator | Saturday 10 January 2026 14:39:28 +0000 (0:00:06.314) 0:05:51.796 ****** 2026-01-10 14:40:20.835976 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-01-10 14:40:20.835989 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-01-10 14:40:20.835996 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:40:20.836002 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-01-10 14:40:20.836015 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-01-10 14:40:20.836022 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:40:20.836029 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-01-10 14:40:20.836043 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-01-10 14:40:20.836049 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:40:20.836056 | orchestrator | 2026-01-10 14:40:20.836062 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2026-01-10 14:40:20.836068 | orchestrator | Saturday 10 January 2026 14:39:29 +0000 (0:00:00.681) 0:05:52.478 ****** 2026-01-10 14:40:20.836075 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-01-10 14:40:20.836081 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-01-10 14:40:20.836089 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-01-10 14:40:20.836095 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 
'no'}})  2026-01-10 14:40:20.836102 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:40:20.836108 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-01-10 14:40:20.836114 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-01-10 14:40:20.836120 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-01-10 14:40:20.836133 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-01-10 14:40:20.836140 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:40:20.836151 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-01-10 14:40:20.836157 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-01-10 14:40:20.836164 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-01-10 14:40:20.836170 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-01-10 14:40:20.836176 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:40:20.836182 | orchestrator | 2026-01-10 14:40:20.836188 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2026-01-10 14:40:20.836194 | orchestrator | Saturday 10 January 2026 14:39:30 +0000 (0:00:01.638) 0:05:54.117 ****** 2026-01-10 14:40:20.836201 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:40:20.836207 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:40:20.836213 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:40:20.836219 | orchestrator | 2026-01-10 14:40:20.836225 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2026-01-10 14:40:20.836231 | orchestrator | Saturday 10 January 2026 14:39:32 +0000 (0:00:01.489) 0:05:55.607 ****** 2026-01-10 14:40:20.836238 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:40:20.836245 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:40:20.836252 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:40:20.836281 | orchestrator | 2026-01-10 14:40:20.836289 | orchestrator | TASK [include_role : swift] **************************************************** 2026-01-10 14:40:20.836296 | orchestrator | Saturday 10 January 2026 14:39:34 +0000 (0:00:02.266) 0:05:57.873 ****** 2026-01-10 14:40:20.836304 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:40:20.836311 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:40:20.836318 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:40:20.836325 | orchestrator | 2026-01-10 14:40:20.836333 | orchestrator | TASK [include_role : tacker] *************************************************** 2026-01-10 14:40:20.836340 | 
orchestrator | Saturday 10 January 2026 14:39:35 +0000 (0:00:00.349) 0:05:58.222 ****** 2026-01-10 14:40:20.836347 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:40:20.836354 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:40:20.836362 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:40:20.836369 | orchestrator | 2026-01-10 14:40:20.836376 | orchestrator | TASK [include_role : trove] **************************************************** 2026-01-10 14:40:20.836383 | orchestrator | Saturday 10 January 2026 14:39:35 +0000 (0:00:00.360) 0:05:58.583 ****** 2026-01-10 14:40:20.836391 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:40:20.836398 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:40:20.836405 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:40:20.836412 | orchestrator | 2026-01-10 14:40:20.836420 | orchestrator | TASK [include_role : venus] **************************************************** 2026-01-10 14:40:20.836427 | orchestrator | Saturday 10 January 2026 14:39:36 +0000 (0:00:00.742) 0:05:59.325 ****** 2026-01-10 14:40:20.836434 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:40:20.836441 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:40:20.836449 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:40:20.836456 | orchestrator | 2026-01-10 14:40:20.836463 | orchestrator | TASK [include_role : watcher] ************************************************** 2026-01-10 14:40:20.836470 | orchestrator | Saturday 10 January 2026 14:39:36 +0000 (0:00:00.324) 0:05:59.649 ****** 2026-01-10 14:40:20.836478 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:40:20.836490 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:40:20.836497 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:40:20.836504 | orchestrator | 2026-01-10 14:40:20.836511 | orchestrator | TASK [include_role : zun] ****************************************************** 2026-01-10 14:40:20.836519 | 
orchestrator | Saturday 10 January 2026 14:39:36 +0000 (0:00:00.329) 0:05:59.979 ****** 2026-01-10 14:40:20.836526 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:40:20.836533 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:40:20.836541 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:40:20.836548 | orchestrator | 2026-01-10 14:40:20.836555 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2026-01-10 14:40:20.836585 | orchestrator | Saturday 10 January 2026 14:39:37 +0000 (0:00:00.832) 0:06:00.812 ****** 2026-01-10 14:40:20.836593 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:40:20.836601 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:40:20.836608 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:40:20.836615 | orchestrator | 2026-01-10 14:40:20.836623 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2026-01-10 14:40:20.836630 | orchestrator | Saturday 10 January 2026 14:39:38 +0000 (0:00:00.821) 0:06:01.633 ****** 2026-01-10 14:40:20.836637 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:40:20.836644 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:40:20.836652 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:40:20.836659 | orchestrator | 2026-01-10 14:40:20.836666 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2026-01-10 14:40:20.836673 | orchestrator | Saturday 10 January 2026 14:39:38 +0000 (0:00:00.355) 0:06:01.989 ****** 2026-01-10 14:40:20.836681 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:40:20.836688 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:40:20.836695 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:40:20.836702 | orchestrator | 2026-01-10 14:40:20.836717 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2026-01-10 14:40:20.836725 | orchestrator | Saturday 10 January 2026 14:39:39 +0000 
(0:00:00.911) 0:06:02.901 ****** 2026-01-10 14:40:20.836732 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:40:20.836739 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:40:20.836746 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:40:20.836753 | orchestrator | 2026-01-10 14:40:20.836761 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2026-01-10 14:40:20.836768 | orchestrator | Saturday 10 January 2026 14:39:41 +0000 (0:00:01.330) 0:06:04.231 ****** 2026-01-10 14:40:20.836775 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:40:20.836783 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:40:20.836790 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:40:20.836797 | orchestrator | 2026-01-10 14:40:20.836804 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2026-01-10 14:40:20.836811 | orchestrator | Saturday 10 January 2026 14:39:42 +0000 (0:00:00.957) 0:06:05.189 ****** 2026-01-10 14:40:20.836819 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:40:20.836826 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:40:20.836833 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:40:20.836840 | orchestrator | 2026-01-10 14:40:20.836847 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2026-01-10 14:40:20.836855 | orchestrator | Saturday 10 January 2026 14:39:46 +0000 (0:00:04.763) 0:06:09.952 ****** 2026-01-10 14:40:20.836862 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:40:20.836869 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:40:20.836876 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:40:20.836883 | orchestrator | 2026-01-10 14:40:20.836891 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2026-01-10 14:40:20.836898 | orchestrator | Saturday 10 January 2026 14:39:49 +0000 (0:00:02.818) 0:06:12.771 ****** 2026-01-10 14:40:20.836906 
| orchestrator | changed: [testbed-node-0] 2026-01-10 14:40:20.836913 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:40:20.836920 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:40:20.836935 | orchestrator | 2026-01-10 14:40:20.836942 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2026-01-10 14:40:20.836950 | orchestrator | Saturday 10 January 2026 14:40:03 +0000 (0:00:13.864) 0:06:26.636 ****** 2026-01-10 14:40:20.836957 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:40:20.836964 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:40:20.836971 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:40:20.836978 | orchestrator | 2026-01-10 14:40:20.836986 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2026-01-10 14:40:20.836993 | orchestrator | Saturday 10 January 2026 14:40:04 +0000 (0:00:01.343) 0:06:27.979 ****** 2026-01-10 14:40:20.837000 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:40:20.837007 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:40:20.837014 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:40:20.837022 | orchestrator | 2026-01-10 14:40:20.837029 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2026-01-10 14:40:20.837036 | orchestrator | Saturday 10 January 2026 14:40:09 +0000 (0:00:04.561) 0:06:32.541 ****** 2026-01-10 14:40:20.837043 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:40:20.837050 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:40:20.837057 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:40:20.837065 | orchestrator | 2026-01-10 14:40:20.837072 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2026-01-10 14:40:20.837079 | orchestrator | Saturday 10 January 2026 14:40:09 +0000 (0:00:00.354) 0:06:32.895 ****** 2026-01-10 14:40:20.837086 | orchestrator | skipping: 
[testbed-node-0] 2026-01-10 14:40:20.837093 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:40:20.837100 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:40:20.837107 | orchestrator | 2026-01-10 14:40:20.837114 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2026-01-10 14:40:20.837122 | orchestrator | Saturday 10 January 2026 14:40:10 +0000 (0:00:00.362) 0:06:33.258 ****** 2026-01-10 14:40:20.837129 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:40:20.837136 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:40:20.837143 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:40:20.837150 | orchestrator | 2026-01-10 14:40:20.837157 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2026-01-10 14:40:20.837165 | orchestrator | Saturday 10 January 2026 14:40:10 +0000 (0:00:00.717) 0:06:33.975 ****** 2026-01-10 14:40:20.837172 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:40:20.837179 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:40:20.837186 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:40:20.837193 | orchestrator | 2026-01-10 14:40:20.837200 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2026-01-10 14:40:20.837207 | orchestrator | Saturday 10 January 2026 14:40:11 +0000 (0:00:00.393) 0:06:34.369 ****** 2026-01-10 14:40:20.837214 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:40:20.837221 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:40:20.837228 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:40:20.837235 | orchestrator | 2026-01-10 14:40:20.837243 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2026-01-10 14:40:20.837250 | orchestrator | Saturday 10 January 2026 14:40:11 +0000 (0:00:00.376) 0:06:34.745 ****** 2026-01-10 14:40:20.837257 | orchestrator | skipping: 
[testbed-node-0] 2026-01-10 14:40:20.837264 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:40:20.837271 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:40:20.837278 | orchestrator | 2026-01-10 14:40:20.837286 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2026-01-10 14:40:20.837293 | orchestrator | Saturday 10 January 2026 14:40:11 +0000 (0:00:00.354) 0:06:35.100 ****** 2026-01-10 14:40:20.837300 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:40:20.837307 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:40:20.837314 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:40:20.837321 | orchestrator | 2026-01-10 14:40:20.837329 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2026-01-10 14:40:20.837341 | orchestrator | Saturday 10 January 2026 14:40:17 +0000 (0:00:05.091) 0:06:40.191 ****** 2026-01-10 14:40:20.837348 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:40:20.837355 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:40:20.837362 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:40:20.837370 | orchestrator | 2026-01-10 14:40:20.837377 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-10 14:40:20.837389 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-01-10 14:40:20.837397 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-01-10 14:40:20.837405 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-01-10 14:40:20.837412 | orchestrator | 2026-01-10 14:40:20.837419 | orchestrator | 2026-01-10 14:40:20.837426 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-10 14:40:20.837433 | orchestrator | Saturday 10 January 2026 14:40:17 +0000 
(0:00:00.772) 0:06:40.964 ****** 2026-01-10 14:40:20.837441 | orchestrator | =============================================================================== 2026-01-10 14:40:20.837448 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 13.86s 2026-01-10 14:40:20.837455 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 6.84s 2026-01-10 14:40:20.837467 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.31s 2026-01-10 14:40:20.837480 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.74s 2026-01-10 14:40:20.837492 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 5.16s 2026-01-10 14:40:20.837504 | orchestrator | loadbalancer : Wait for haproxy to listen on VIP ------------------------ 5.09s 2026-01-10 14:40:20.837613 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 4.76s 2026-01-10 14:40:20.837646 | orchestrator | loadbalancer : Copying over config.json files for services -------------- 4.73s 2026-01-10 14:40:20.837658 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 4.63s 2026-01-10 14:40:20.837671 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 4.60s 2026-01-10 14:40:20.837683 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.59s 2026-01-10 14:40:20.837696 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 4.57s 2026-01-10 14:40:20.837708 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 4.56s 2026-01-10 14:40:20.837720 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.38s 2026-01-10 14:40:20.837727 | orchestrator | loadbalancer : Copying over proxysql config 
----------------------------- 4.28s
2026-01-10 14:40:20.837734 | orchestrator | service-cert-copy : loadbalancer | Copying over extra CA certificates --- 4.21s
2026-01-10 14:40:20.837741 | orchestrator | haproxy-config : Copying over horizon haproxy config -------------------- 4.21s
2026-01-10 14:40:20.837748 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 4.18s
2026-01-10 14:40:20.837756 | orchestrator | haproxy-config : Copying over ceph-rgw haproxy config ------------------- 4.14s
2026-01-10 14:40:20.837768 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 4.07s
2026-01-10 14:40:20.837780 | orchestrator | 2026-01-10 14:40:20 | INFO  | Task 4b654874-e1fe-4698-a51e-2c83346e7422 is in state STARTED
2026-01-10 14:40:20.837792 | orchestrator | 2026-01-10 14:40:20 | INFO  | Task 17d93e9b-366a-450f-8ecc-93dc96c1cffb is in state STARTED
2026-01-10 14:40:20.837805 | orchestrator | 2026-01-10 14:40:20 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:40:23.872450 | orchestrator | 2026-01-10 14:40:23 | INFO  | Task f8ee9db1-a5aa-4411-bee5-bf63fa5e6a70 is in state STARTED
2026-01-10 14:40:23.873651 | orchestrator | 2026-01-10 14:40:23 | INFO  | Task 4b654874-e1fe-4698-a51e-2c83346e7422 is in state STARTED
2026-01-10 14:40:23.876013 | orchestrator | 2026-01-10 14:40:23 | INFO  | Task 17d93e9b-366a-450f-8ecc-93dc96c1cffb is in state STARTED
2026-01-10 14:40:23.876078 | orchestrator | 2026-01-10 14:40:23 | INFO  | Wait 1 second(s) until the next check
[... identical status polls repeated every ~3 seconds from 14:40:26 through 14:42:29; tasks f8ee9db1-a5aa-4411-bee5-bf63fa5e6a70, 4b654874-e1fe-4698-a51e-2c83346e7422 and 17d93e9b-366a-450f-8ecc-93dc96c1cffb remained in state STARTED throughout ...]
2026-01-10 14:42:32.111605 | orchestrator | 2026-01-10 14:42:32 | INFO  | Task f8ee9db1-a5aa-4411-bee5-bf63fa5e6a70 is in state STARTED
2026-01-10 14:42:32.113571 | orchestrator | 2026-01-10 14:42:32 | INFO  | Task a743dc27-9cca-41d3-9c4c-3d0ae7b23704 is in state STARTED
2026-01-10 14:42:32.114392 | orchestrator | 2026-01-10 14:42:32 | INFO  | Task 4b654874-e1fe-4698-a51e-2c83346e7422 is in state STARTED
2026-01-10 14:42:32.119987 | orchestrator | 2026-01-10 14:42:32 | INFO  | Task 17d93e9b-366a-450f-8ecc-93dc96c1cffb is in state SUCCESS
2026-01-10 14:42:32.122289 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2.16.14
2026-01-10 14:42:32.122360 | orchestrator |
2026-01-10 14:42:32.122365 | orchestrator | PLAY [Prepare deployment of Ceph services] *************************************
2026-01-10 14:42:32.122369 | orchestrator |
2026-01-10 14:42:32.122373 | orchestrator | TASK [ceph-facts :
Include facts.yml] ****************************************** 2026-01-10 14:42:32.122374 | orchestrator | Saturday 10 January 2026 14:30:59 +0000 (0:00:00.849) 0:00:00.849 ****** 2026-01-10 14:42:32.122379 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:42:32.122384 | orchestrator | 2026-01-10 14:42:32.122388 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-01-10 14:42:32.122392 | orchestrator | Saturday 10 January 2026 14:31:01 +0000 (0:00:01.240) 0:00:02.089 ****** 2026-01-10 14:42:32.122420 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:42:32.122426 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:42:32.122430 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:42:32.122434 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:42:32.122438 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:42:32.122459 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:42:32.122463 | orchestrator | 2026-01-10 14:42:32.122467 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-01-10 14:42:32.122471 | orchestrator | Saturday 10 January 2026 14:31:02 +0000 (0:00:01.803) 0:00:03.893 ****** 2026-01-10 14:42:32.122475 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:42:32.122479 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:42:32.122483 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:42:32.122487 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:42:32.122490 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:42:32.122494 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:42:32.122498 | orchestrator | 2026-01-10 14:42:32.122527 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-01-10 14:42:32.122532 | orchestrator | Saturday 10 January 2026 14:31:03 +0000 (0:00:00.730) 
0:00:04.624 ****** 2026-01-10 14:42:32.122536 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:42:32.122539 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:42:32.122561 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:42:32.122565 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:42:32.122569 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:42:32.122573 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:42:32.122576 | orchestrator | 2026-01-10 14:42:32.122580 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-01-10 14:42:32.122584 | orchestrator | Saturday 10 January 2026 14:31:04 +0000 (0:00:01.033) 0:00:05.657 ****** 2026-01-10 14:42:32.122588 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:42:32.122592 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:42:32.122596 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:42:32.122599 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:42:32.122603 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:42:32.122607 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:42:32.122611 | orchestrator | 2026-01-10 14:42:32.122615 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-01-10 14:42:32.122619 | orchestrator | Saturday 10 January 2026 14:31:05 +0000 (0:00:00.653) 0:00:06.311 ****** 2026-01-10 14:42:32.122623 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:42:32.122627 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:42:32.122630 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:42:32.122649 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:42:32.122654 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:42:32.122668 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:42:32.122671 | orchestrator | 2026-01-10 14:42:32.122675 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-01-10 14:42:32.122679 | orchestrator | Saturday 10 January 2026 
14:31:05 +0000 (0:00:00.731) 0:00:07.042 ****** 2026-01-10 14:42:32.122790 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:42:32.122798 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:42:32.122804 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:42:32.122810 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:42:32.122816 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:42:32.122823 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:42:32.122829 | orchestrator | 2026-01-10 14:42:32.122836 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-01-10 14:42:32.122842 | orchestrator | Saturday 10 January 2026 14:31:07 +0000 (0:00:01.173) 0:00:08.215 ****** 2026-01-10 14:42:32.122848 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:42:32.122855 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:42:32.122861 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:42:32.122867 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:42:32.122873 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:42:32.122879 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:42:32.122886 | orchestrator | 2026-01-10 14:42:32.122893 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-01-10 14:42:32.122900 | orchestrator | Saturday 10 January 2026 14:31:08 +0000 (0:00:00.883) 0:00:09.099 ****** 2026-01-10 14:42:32.122907 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:42:32.122913 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:42:32.122920 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:42:32.122926 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:42:32.122953 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:42:32.122959 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:42:32.122963 | orchestrator | 2026-01-10 14:42:32.122968 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 
2026-01-10 14:42:32.122972 | orchestrator | Saturday 10 January 2026 14:31:09 +0000 (0:00:01.170) 0:00:10.270 ****** 2026-01-10 14:42:32.122977 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-10 14:42:32.122981 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-10 14:42:32.122986 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-10 14:42:32.122990 | orchestrator | 2026-01-10 14:42:32.122995 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-01-10 14:42:32.122999 | orchestrator | Saturday 10 January 2026 14:31:10 +0000 (0:00:00.833) 0:00:11.104 ****** 2026-01-10 14:42:32.123003 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:42:32.123007 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:42:32.123011 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:42:32.123047 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:42:32.123054 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:42:32.123062 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:42:32.123069 | orchestrator | 2026-01-10 14:42:32.123076 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-01-10 14:42:32.123101 | orchestrator | Saturday 10 January 2026 14:31:11 +0000 (0:00:01.460) 0:00:12.565 ****** 2026-01-10 14:42:32.123108 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-10 14:42:32.123115 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-10 14:42:32.123122 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-10 14:42:32.123130 | orchestrator | 2026-01-10 14:42:32.123136 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-01-10 
14:42:32.123143 | orchestrator | Saturday 10 January 2026 14:31:14 +0000 (0:00:02.640) 0:00:15.205 ******
2026-01-10 14:42:32.123150 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-01-10 14:42:32.123157 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-01-10 14:42:32.123165 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-01-10 14:42:32.123170 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:42:32.123182 | orchestrator |
2026-01-10 14:42:32.123186 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-01-10 14:42:32.123191 | orchestrator | Saturday 10 January 2026 14:31:15 +0000 (0:00:00.900) 0:00:16.106 ******
2026-01-10 14:42:32.123197 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-01-10 14:42:32.123204 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-01-10 14:42:32.123209 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-01-10 14:42:32.123213 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:42:32.123218 | orchestrator |
2026-01-10 14:42:32.123222 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-01-10 14:42:32.123226 | orchestrator | Saturday 10 January 2026 14:31:16 +0000 (0:00:01.016) 0:00:17.123 ******
2026-01-10 14:42:32.123259 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-01-10 14:42:32.123268 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-01-10 14:42:32.123274 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-01-10 14:42:32.123281 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:42:32.123286 | orchestrator |
2026-01-10 14:42:32.123290 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-01-10 14:42:32.123294 | orchestrator | Saturday 10 January 2026 14:31:16 +0000 (0:00:00.587) 0:00:17.711 ******
2026-01-10 14:42:32.123305 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-01-10 14:31:12.263876', 'end': '2026-01-10 14:31:12.528985', 'delta': '0:00:00.265109', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-01-10 14:42:32.123312 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-01-10 14:31:13.107492', 'end': '2026-01-10 14:31:13.302106', 'delta': '0:00:00.194614', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-01-10 14:42:32.123322 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-01-10 14:31:13.717605', 'end': '2026-01-10 14:31:13.922217', 'delta': '0:00:00.204612', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-01-10 14:42:32.123326 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:42:32.123330 | orchestrator |
2026-01-10 14:42:32.123333 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-01-10 14:42:32.123337 | orchestrator | Saturday 10 January 2026 14:31:16 +0000 (0:00:00.328) 0:00:18.040 ******
2026-01-10 14:42:32.123489 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:42:32.123496 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:42:32.123499 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:42:32.123503 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:42:32.123507 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:42:32.123511 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:42:32.123514 | orchestrator |
2026-01-10 14:42:32.123518 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-01-10 14:42:32.123522 | orchestrator | Saturday 10 January 2026 14:31:19 +0000 (0:00:02.154) 0:00:20.194 ******
2026-01-10 14:42:32.123526 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-01-10 14:42:32.123530 | orchestrator |
2026-01-10 14:42:32.123537 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-01-10 14:42:32.123541 | orchestrator | Saturday 10 January 2026 14:31:20 +0000 (0:00:00.982) 0:00:21.177 ******
2026-01-10 14:42:32.123545 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:42:32.123549 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:42:32.123555 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:42:32.123561 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:42:32.123567 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:42:32.123573 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:42:32.123580 | orchestrator |
2026-01-10 14:42:32.123585 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-01-10 14:42:32.123591 | orchestrator | Saturday 10 January 2026 14:31:21 +0000 (0:00:01.897) 0:00:23.074 ******
2026-01-10 14:42:32.123598 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:42:32.123604 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:42:32.123610 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:42:32.123616 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:42:32.123622 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:42:32.123627 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:42:32.123633 | orchestrator |
2026-01-10 14:42:32.123639 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-01-10 14:42:32.123645 | orchestrator | Saturday 10 January 2026 14:31:23 +0000 (0:00:01.852) 0:00:24.926 ******
2026-01-10 14:42:32.123651 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:42:32.123657 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:42:32.123662 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:42:32.123669 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:42:32.123682 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:42:32.123688 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:42:32.123694 | orchestrator |
2026-01-10 14:42:32.123699 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-01-10 14:42:32.123703 | orchestrator | Saturday 10 January 2026 14:31:24 +0000 (0:00:00.933) 0:00:25.860 ******
2026-01-10 14:42:32.123707 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:42:32.123711 | orchestrator |
2026-01-10 14:42:32.123715 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-01-10 14:42:32.123718 | orchestrator | Saturday 10 January 2026 14:31:24 +0000 (0:00:00.082) 0:00:25.942 ******
2026-01-10 14:42:32.123722 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:42:32.123726 | orchestrator |
2026-01-10 14:42:32.123729 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-01-10 14:42:32.123733 | orchestrator | Saturday 10 January 2026 14:31:25 +0000 (0:00:00.165) 0:00:26.108 ******
2026-01-10 14:42:32.123737 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:42:32.123741 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:42:32.123745 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:42:32.123754 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:42:32.123758 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:42:32.123762 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:42:32.123765 | orchestrator |
2026-01-10 14:42:32.123769 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-01-10 14:42:32.123773 | orchestrator | Saturday 10 January 2026 14:31:25 +0000 (0:00:00.647) 0:00:26.755 ******
2026-01-10 14:42:32.123777 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:42:32.123846 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:42:32.123852 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:42:32.123858 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:42:32.123864 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:42:32.123869 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:42:32.123875 | orchestrator |
2026-01-10 14:42:32.123881 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-01-10 14:42:32.123887 | orchestrator | Saturday 10 January 2026 14:31:26 +0000 (0:00:00.898) 0:00:27.654 ******
2026-01-10 14:42:32.123893 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:42:32.123898 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:42:32.123904 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:42:32.123909 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:42:32.123915 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:42:32.123922 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:42:32.123928 | orchestrator |
2026-01-10 14:42:32.123933 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-01-10 14:42:32.123940 | orchestrator | Saturday 10 January 2026 14:31:27 +0000 (0:00:00.681) 0:00:28.336 ******
2026-01-10 14:42:32.123965 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:42:32.124023 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:42:32.124030 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:42:32.124036 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:42:32.124043 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:42:32.124049 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:42:32.124055 | orchestrator |
2026-01-10 14:42:32.124062 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-01-10 14:42:32.124141 | orchestrator | Saturday 10 January 2026 14:31:28 +0000 (0:00:01.014) 0:00:29.351 ******
2026-01-10 14:42:32.124151 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:42:32.124157 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:42:32.124164 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:42:32.124170 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:42:32.124177 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:42:32.124183 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:42:32.124189 | orchestrator |
2026-01-10 14:42:32.124206 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-01-10 14:42:32.124212 | orchestrator | Saturday 10 January 2026 14:31:28 +0000 (0:00:00.536) 0:00:29.887 ******
2026-01-10 14:42:32.124218 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:42:32.124225 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:42:32.124231 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:42:32.124237 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:42:32.124244 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:42:32.124250 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:42:32.124256 | orchestrator |
2026-01-10 14:42:32.124262 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-01-10 14:42:32.124270 | orchestrator | Saturday 10 January 2026 14:31:29 +0000 (0:00:00.911) 0:00:30.799 ******
2026-01-10 14:42:32.124276 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:42:32.124282 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:42:32.124295 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:42:32.124301 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:42:32.124308 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:42:32.124314 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:42:32.124319 | orchestrator |
2026-01-10 14:42:32.124325 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-01-10 14:42:32.124332 | orchestrator | Saturday 10 January 2026 14:31:30 +0000 (0:00:00.867) 0:00:31.667 ******
2026-01-10 14:42:32.124341 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--afcf3728--3a76--5607--aebb--61451d8643bd-osd--block--afcf3728--3a76--5607--aebb--61451d8643bd', 'dm-uuid-LVM-3EtkfyBxqllZGPVj4jX11hTg3QJalLi9ufUqhVZyz9vaMSCvbVz9QMTCeSCNfKHd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-01-10 14:42:32.124351 | orchestrator |
skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--7d69473f--eeb6--5b22--bf27--181ed9eac77f-osd--block--7d69473f--eeb6--5b22--bf27--181ed9eac77f', 'dm-uuid-LVM-rJQmIINvGCLvyiYBR7BPCsE0l9Ac7YGetpM2LEc5JOr63yjDWOuKcaTFCCdwRmte'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-10 14:42:32.124369 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:42:32.124416 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:42:32.124423 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:42:32.124437 
| orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:42:32.124519 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:42:32.124618 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8bd1ebb6--f1fa--58d8--b8a2--53a51729cfca-osd--block--8bd1ebb6--f1fa--58d8--b8a2--53a51729cfca', 'dm-uuid-LVM-2Fwxc5Ai0VKcdRkNWbyH7mgikuiNPcDqz4zN2NxBNprPiRAzwpTBwbkt5aHRRx46'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-10 14:42:32.124627 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 
14:42:32.124634 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d6926eeb--1396--512c--9972--e44f7d919ea4-osd--block--d6926eeb--1396--512c--9972--e44f7d919ea4', 'dm-uuid-LVM-TVbW5CMftcxSXy1c5xps3v2GvblaD84SboJE4C3svpS8uL1HxSuFZkgv8JDsZseN'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-10 14:42:32.124640 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:42:32.124680 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:42:32.124688 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 
1}})  2026-01-10 14:42:32.124705 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_de778ce0-4f6d-44be-b211-86e0cabeb927', 'scsi-SQEMU_QEMU_HARDDISK_de778ce0-4f6d-44be-b211-86e0cabeb927'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_de778ce0-4f6d-44be-b211-86e0cabeb927-part1', 'scsi-SQEMU_QEMU_HARDDISK_de778ce0-4f6d-44be-b211-86e0cabeb927-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_de778ce0-4f6d-44be-b211-86e0cabeb927-part14', 'scsi-SQEMU_QEMU_HARDDISK_de778ce0-4f6d-44be-b211-86e0cabeb927-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_de778ce0-4f6d-44be-b211-86e0cabeb927-part15', 'scsi-SQEMU_QEMU_HARDDISK_de778ce0-4f6d-44be-b211-86e0cabeb927-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_de778ce0-4f6d-44be-b211-86e0cabeb927-part16', 'scsi-SQEMU_QEMU_HARDDISK_de778ce0-4f6d-44be-b211-86e0cabeb927-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-10 14:42:32.124721 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:42:32.124729 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--afcf3728--3a76--5607--aebb--61451d8643bd-osd--block--afcf3728--3a76--5607--aebb--61451d8643bd'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-9qr1pZ-RkHo-c3FE-UdXw-MB1l-GSnT-LlveUO', 'scsi-0QEMU_QEMU_HARDDISK_15b7e91d-d5fd-4068-ac98-0857e3d5fdf2', 'scsi-SQEMU_QEMU_HARDDISK_15b7e91d-d5fd-4068-ac98-0857e3d5fdf2'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-10 14:42:32.124744 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:42:32.124751 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 
'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:42:32.124766 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:42:32.124773 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:42:32.124783 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--7d69473f--eeb6--5b22--bf27--181ed9eac77f-osd--block--7d69473f--eeb6--5b22--bf27--181ed9eac77f'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-huiRGz-aiBY-e8Ey-pv2n-4eFw-rHyE-ibrNbS', 'scsi-0QEMU_QEMU_HARDDISK_b1ccfdf8-8ebc-4581-b39b-71c057731eea', 'scsi-SQEMU_QEMU_HARDDISK_b1ccfdf8-8ebc-4581-b39b-71c057731eea'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-10 14:42:32.124790 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:42:32.124797 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:42:32.124809 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e7f1afe6-f3aa-449f-9835-56bec5ec9c51', 'scsi-SQEMU_QEMU_HARDDISK_e7f1afe6-f3aa-449f-9835-56bec5ec9c51'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e7f1afe6-f3aa-449f-9835-56bec5ec9c51-part1', 'scsi-SQEMU_QEMU_HARDDISK_e7f1afe6-f3aa-449f-9835-56bec5ec9c51-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e7f1afe6-f3aa-449f-9835-56bec5ec9c51-part14', 'scsi-SQEMU_QEMU_HARDDISK_e7f1afe6-f3aa-449f-9835-56bec5ec9c51-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e7f1afe6-f3aa-449f-9835-56bec5ec9c51-part15', 'scsi-SQEMU_QEMU_HARDDISK_e7f1afe6-f3aa-449f-9835-56bec5ec9c51-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e7f1afe6-f3aa-449f-9835-56bec5ec9c51-part16', 'scsi-SQEMU_QEMU_HARDDISK_e7f1afe6-f3aa-449f-9835-56bec5ec9c51-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-10 14:42:32.124822 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--8bd1ebb6--f1fa--58d8--b8a2--53a51729cfca-osd--block--8bd1ebb6--f1fa--58d8--b8a2--53a51729cfca'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-s2tKbb-biWW-rhor-AR6o-qWz0-GpRW-LWXLkj', 'scsi-0QEMU_QEMU_HARDDISK_44339de3-07bb-4d03-9d3b-2e0777e51af2', 'scsi-SQEMU_QEMU_HARDDISK_44339de3-07bb-4d03-9d3b-2e0777e51af2'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-10 14:42:32.124855 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--d6926eeb--1396--512c--9972--e44f7d919ea4-osd--block--d6926eeb--1396--512c--9972--e44f7d919ea4'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-JNgVMk-aGs6-lbeJ-KhY4-gEYt-H63q-Mfmq4b', 'scsi-0QEMU_QEMU_HARDDISK_09bcf5bb-a136-4209-bef3-37a648ec73be', 'scsi-SQEMU_QEMU_HARDDISK_09bcf5bb-a136-4209-bef3-37a648ec73be'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-10 14:42:32.124865 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c3c9ac61-c03e-421c-9f43-37b1f8399a84', 'scsi-SQEMU_QEMU_HARDDISK_c3c9ac61-c03e-421c-9f43-37b1f8399a84'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-10 14:42:32.124873 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c56b24e3-125a-48ee-acc4-7420ce900c20', 'scsi-SQEMU_QEMU_HARDDISK_c56b24e3-125a-48ee-acc4-7420ce900c20'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-10 14:42:32.125974 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-10-13-45-38-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-10 14:42:32.126067 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': 
['config-2'], 'masters': [], 'uuids': ['2026-01-10-13-45-56-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-10 14:42:32.126078 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--377cb61f--8fa6--58d2--888b--072b5e96ec0c-osd--block--377cb61f--8fa6--58d2--888b--072b5e96ec0c', 'dm-uuid-LVM-vZkjZSQHbS0q2GyNOh44hFZjUTSzvcamynRYm7ghd2xWRzADM7zfTvvhOTQ6ZkFq'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-10 14:42:32.126086 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:42:32.126099 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--82a5292d--e4f5--5675--b04e--23ddf5e1abb7-osd--block--82a5292d--e4f5--5675--b04e--23ddf5e1abb7', 'dm-uuid-LVM-DRgormTtPowM6Igp9Je7HfxfYSL52AtszM3oBsEeG5RiUP3wrwR8QJdi01POVrmD'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-10 14:42:32.126106 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': 
None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:42:32.126114 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:42:32.126121 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:42:32.126127 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:42:32.126188 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  
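The skipped loop items above are iterations over each host's `ansible_devices` facts: loop devices, device-mapper volumes already claimed by Ceph LVs, the config-drive CD-ROM, and the partitioned root disk all get passed over. As a rough illustration only (this is a hypothetical sketch, not the actual ceph-ansible `osd_auto_discovery` implementation), a whole-disk OSD candidate filter over such facts might look like:

```python
# Hypothetical sketch of osd_auto_discovery-style filtering over
# Ansible 'ansible_devices' facts (NOT the actual ceph-ansible logic).

def candidate_osd_devices(devices: dict) -> list[str]:
    """Return device names that look usable as whole-disk OSDs."""
    selected = []
    for name, info in devices.items():
        if name.startswith(("loop", "dm-", "sr")):   # virtual and optical devices
            continue
        if info.get("removable") == "1":             # removable media
            continue
        if info.get("partitions"):                   # already partitioned (e.g. root disk)
            continue
        if info.get("holders"):                      # already claimed (e.g. by a ceph LV)
            continue
        selected.append(f"/dev/{name}")
    return sorted(selected)

# Example modeled on the facts in this log: sdd is a bare 20 GB disk,
# sda holds the root partitions, sdb is held by a ceph LV.
facts = {
    "sda": {"removable": "0", "partitions": {"sda1": {}}, "holders": []},
    "sdb": {"removable": "0", "partitions": {}, "holders": ["ceph--osd--block"]},
    "sdd": {"removable": "0", "partitions": {}, "holders": []},
    "sr0": {"removable": "1", "partitions": {}, "holders": []},
    "loop0": {"removable": "0", "partitions": {}, "holders": []},
}
print(candidate_osd_devices(facts))  # ['/dev/sdd']
```

In this run the filter is moot: every item is skipped up front because `osd_auto_discovery | default(False) | bool` is false, as the `false_condition` fields in the records below show.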
2026-01-10 14:42:32.126204 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:42:32.126210 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:42:32.126216 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:42:32.126223 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:42:32.126233 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:42:32.126239 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:42:32.126245 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:42:32.126251 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:42:32.126298 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 
'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:42:32.126312 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:42:32.126324 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1f1914ea-fd0f-4c7c-b7b9-c351b421a456', 'scsi-SQEMU_QEMU_HARDDISK_1f1914ea-fd0f-4c7c-b7b9-c351b421a456'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1f1914ea-fd0f-4c7c-b7b9-c351b421a456-part1', 'scsi-SQEMU_QEMU_HARDDISK_1f1914ea-fd0f-4c7c-b7b9-c351b421a456-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1f1914ea-fd0f-4c7c-b7b9-c351b421a456-part14', 'scsi-SQEMU_QEMU_HARDDISK_1f1914ea-fd0f-4c7c-b7b9-c351b421a456-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1f1914ea-fd0f-4c7c-b7b9-c351b421a456-part15', 'scsi-SQEMU_QEMU_HARDDISK_1f1914ea-fd0f-4c7c-b7b9-c351b421a456-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 
512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1f1914ea-fd0f-4c7c-b7b9-c351b421a456-part16', 'scsi-SQEMU_QEMU_HARDDISK_1f1914ea-fd0f-4c7c-b7b9-c351b421a456-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-10 14:42:32.126332 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--377cb61f--8fa6--58d2--888b--072b5e96ec0c-osd--block--377cb61f--8fa6--58d2--888b--072b5e96ec0c'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-fDJB5p-jnnX-ZrSt-I40a-mAqp-Scoe-YsWpaI', 'scsi-0QEMU_QEMU_HARDDISK_f0ceade3-8439-47c2-ab29-dba6a2d0af37', 'scsi-SQEMU_QEMU_HARDDISK_f0ceade3-8439-47c2-ab29-dba6a2d0af37'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-10 14:42:32.126339 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 
14:42:32.126349 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:42:32.126397 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--82a5292d--e4f5--5675--b04e--23ddf5e1abb7-osd--block--82a5292d--e4f5--5675--b04e--23ddf5e1abb7'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-qtVK5u-Tw8C-lwsP-BX4N-dhRe-17lE-qDl3gH', 'scsi-0QEMU_QEMU_HARDDISK_7172e707-12eb-4bf8-889d-ca95993faa89', 'scsi-SQEMU_QEMU_HARDDISK_7172e707-12eb-4bf8-889d-ca95993faa89'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-10 14:42:32.126412 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_77c5dc10-1db3-4eb7-96b3-7516ed6edf54', 'scsi-SQEMU_QEMU_HARDDISK_77c5dc10-1db3-4eb7-96b3-7516ed6edf54'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_77c5dc10-1db3-4eb7-96b3-7516ed6edf54-part1', 'scsi-SQEMU_QEMU_HARDDISK_77c5dc10-1db3-4eb7-96b3-7516ed6edf54-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_77c5dc10-1db3-4eb7-96b3-7516ed6edf54-part14', 'scsi-SQEMU_QEMU_HARDDISK_77c5dc10-1db3-4eb7-96b3-7516ed6edf54-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_77c5dc10-1db3-4eb7-96b3-7516ed6edf54-part15', 'scsi-SQEMU_QEMU_HARDDISK_77c5dc10-1db3-4eb7-96b3-7516ed6edf54-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_77c5dc10-1db3-4eb7-96b3-7516ed6edf54-part16', 'scsi-SQEMU_QEMU_HARDDISK_77c5dc10-1db3-4eb7-96b3-7516ed6edf54-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-10 14:42:32.126420 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_573345f2-5167-4bb0-bd40-d392a39279fc', 'scsi-SQEMU_QEMU_HARDDISK_573345f2-5167-4bb0-bd40-d392a39279fc'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-10 14:42:32.126426 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-10-13-45-40-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-10 14:42:32.126506 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:42:32.126578 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:42:32.126587 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:42:32.126594 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:42:32.126601 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:42:32.126607 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:42:32.126619 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-10-13-46-01-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-10 14:42:32.126625 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:42:32.126632 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:42:32.126702 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e49f993b-cdad-4d7e-9728-2cad134db285', 'scsi-SQEMU_QEMU_HARDDISK_e49f993b-cdad-4d7e-9728-2cad134db285'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e49f993b-cdad-4d7e-9728-2cad134db285-part1', 'scsi-SQEMU_QEMU_HARDDISK_e49f993b-cdad-4d7e-9728-2cad134db285-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e49f993b-cdad-4d7e-9728-2cad134db285-part14', 'scsi-SQEMU_QEMU_HARDDISK_e49f993b-cdad-4d7e-9728-2cad134db285-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e49f993b-cdad-4d7e-9728-2cad134db285-part15', 'scsi-SQEMU_QEMU_HARDDISK_e49f993b-cdad-4d7e-9728-2cad134db285-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e49f993b-cdad-4d7e-9728-2cad134db285-part16', 'scsi-SQEMU_QEMU_HARDDISK_e49f993b-cdad-4d7e-9728-2cad134db285-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-10 14:42:32.126714 | 
orchestrator | skipping: [testbed-node-0] 2026-01-10 14:42:32.126721 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-10-13-45-36-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-10 14:42:32.126736 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:42:32.126742 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:42:32.126749 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:42:32.126755 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:42:32.126768 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:42:32.126774 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:42:32.126845 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:42:32.126855 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:42:32.126861 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  
2026-01-10 14:42:32.126867 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:42:32.126880 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c01e5313-3aea-4f62-a892-f2183ae77e08', 'scsi-SQEMU_QEMU_HARDDISK_c01e5313-3aea-4f62-a892-f2183ae77e08'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c01e5313-3aea-4f62-a892-f2183ae77e08-part1', 'scsi-SQEMU_QEMU_HARDDISK_c01e5313-3aea-4f62-a892-f2183ae77e08-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c01e5313-3aea-4f62-a892-f2183ae77e08-part14', 'scsi-SQEMU_QEMU_HARDDISK_c01e5313-3aea-4f62-a892-f2183ae77e08-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c01e5313-3aea-4f62-a892-f2183ae77e08-part15', 'scsi-SQEMU_QEMU_HARDDISK_c01e5313-3aea-4f62-a892-f2183ae77e08-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_c01e5313-3aea-4f62-a892-f2183ae77e08-part16', 'scsi-SQEMU_QEMU_HARDDISK_c01e5313-3aea-4f62-a892-f2183ae77e08-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-01-10 14:42:32.126941 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-10-13-45-54-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-01-10 14:42:32.126950 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:42:32.126957 | orchestrator |
2026-01-10 14:42:32.126964 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-01-10 14:42:32.126971 | orchestrator | Saturday 10 January 2026 14:31:32 +0000 (0:00:02.233) 0:00:33.901 ******
2026-01-10 14:42:32.126979 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8bd1ebb6--f1fa--58d8--b8a2--53a51729cfca-osd--block--8bd1ebb6--f1fa--58d8--b8a2--53a51729cfca',
'dm-uuid-LVM-2Fwxc5Ai0VKcdRkNWbyH7mgikuiNPcDqz4zN2NxBNprPiRAzwpTBwbkt5aHRRx46'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:42:32.126987 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--afcf3728--3a76--5607--aebb--61451d8643bd-osd--block--afcf3728--3a76--5607--aebb--61451d8643bd', 'dm-uuid-LVM-3EtkfyBxqllZGPVj4jX11hTg3QJalLi9ufUqhVZyz9vaMSCvbVz9QMTCeSCNfKHd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:42:32.126997 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--7d69473f--eeb6--5b22--bf27--181ed9eac77f-osd--block--7d69473f--eeb6--5b22--bf27--181ed9eac77f', 'dm-uuid-LVM-rJQmIINvGCLvyiYBR7BPCsE0l9Ac7YGetpM2LEc5JOr63yjDWOuKcaTFCCdwRmte'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 
'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  [repeated per-device "skipping" loop output condensed: testbed-node-3, testbed-node-4 and testbed-node-5 skipped every block device (dm-0, dm-1, loop0..loop7, sda..sdd, sr0) because the conditional 'osd_auto_discovery | default(False) | bool' evaluated to False; testbed-node-0, testbed-node-1 and testbed-node-2 skipped their devices because 'inventory_hostname in groups.get(osd_group_name, [])' evaluated to False]  2026-01-10 14:42:32.127812 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0,
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:42:32.127819 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--377cb61f--8fa6--58d2--888b--072b5e96ec0c-osd--block--377cb61f--8fa6--58d2--888b--072b5e96ec0c'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-fDJB5p-jnnX-ZrSt-I40a-mAqp-Scoe-YsWpaI', 'scsi-0QEMU_QEMU_HARDDISK_f0ceade3-8439-47c2-ab29-dba6a2d0af37', 'scsi-SQEMU_QEMU_HARDDISK_f0ceade3-8439-47c2-ab29-dba6a2d0af37'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:42:32.127869 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:42:32.127878 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | 
bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--82a5292d--e4f5--5675--b04e--23ddf5e1abb7-osd--block--82a5292d--e4f5--5675--b04e--23ddf5e1abb7'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-qtVK5u-Tw8C-lwsP-BX4N-dhRe-17lE-qDl3gH', 'scsi-0QEMU_QEMU_HARDDISK_7172e707-12eb-4bf8-889d-ca95993faa89', 'scsi-SQEMU_QEMU_HARDDISK_7172e707-12eb-4bf8-889d-ca95993faa89'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:42:32.127890 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:42:32.127899 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:42:32.127905 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_573345f2-5167-4bb0-bd40-d392a39279fc', 'scsi-SQEMU_QEMU_HARDDISK_573345f2-5167-4bb0-bd40-d392a39279fc'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:42:32.127951 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:42:32.127960 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 
'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:42:32.127967 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c3c9ac61-c03e-421c-9f43-37b1f8399a84', 'scsi-SQEMU_QEMU_HARDDISK_c3c9ac61-c03e-421c-9f43-37b1f8399a84'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:42:32.127978 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-10-13-46-01-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:42:32.127988 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:42:32.127994 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:42:32.128012 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:42:32.128056 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:42:32.128065 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 
'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-10-13-45-56-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:42:32.128077 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:42:32.128090 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-01-10 14:42:32.128096 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:42:32.128106 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:42:32.128113 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:42:32.128119 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 
'ansible_loop_var': 'item'})  2026-01-10 14:42:32.128166 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_77c5dc10-1db3-4eb7-96b3-7516ed6edf54', 'scsi-SQEMU_QEMU_HARDDISK_77c5dc10-1db3-4eb7-96b3-7516ed6edf54'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_77c5dc10-1db3-4eb7-96b3-7516ed6edf54-part1', 'scsi-SQEMU_QEMU_HARDDISK_77c5dc10-1db3-4eb7-96b3-7516ed6edf54-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_77c5dc10-1db3-4eb7-96b3-7516ed6edf54-part14', 'scsi-SQEMU_QEMU_HARDDISK_77c5dc10-1db3-4eb7-96b3-7516ed6edf54-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_77c5dc10-1db3-4eb7-96b3-7516ed6edf54-part15', 'scsi-SQEMU_QEMU_HARDDISK_77c5dc10-1db3-4eb7-96b3-7516ed6edf54-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_77c5dc10-1db3-4eb7-96b3-7516ed6edf54-part16', 'scsi-SQEMU_QEMU_HARDDISK_77c5dc10-1db3-4eb7-96b3-7516ed6edf54-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 
512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:42:32.128192 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:42:32.128199 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:42:32.128258 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB 
PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-10-13-45-40-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:42:32.128267 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:42:32.128282 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:42:32.128289 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, 
[])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:42:32.128335 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e49f993b-cdad-4d7e-9728-2cad134db285', 'scsi-SQEMU_QEMU_HARDDISK_e49f993b-cdad-4d7e-9728-2cad134db285'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e49f993b-cdad-4d7e-9728-2cad134db285-part1', 'scsi-SQEMU_QEMU_HARDDISK_e49f993b-cdad-4d7e-9728-2cad134db285-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e49f993b-cdad-4d7e-9728-2cad134db285-part14', 'scsi-SQEMU_QEMU_HARDDISK_e49f993b-cdad-4d7e-9728-2cad134db285-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e49f993b-cdad-4d7e-9728-2cad134db285-part15', 'scsi-SQEMU_QEMU_HARDDISK_e49f993b-cdad-4d7e-9728-2cad134db285-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 
'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e49f993b-cdad-4d7e-9728-2cad134db285-part16', 'scsi-SQEMU_QEMU_HARDDISK_e49f993b-cdad-4d7e-9728-2cad134db285-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:42:32.128425 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c01e5313-3aea-4f62-a892-f2183ae77e08', 'scsi-SQEMU_QEMU_HARDDISK_c01e5313-3aea-4f62-a892-f2183ae77e08'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c01e5313-3aea-4f62-a892-f2183ae77e08-part1', 'scsi-SQEMU_QEMU_HARDDISK_c01e5313-3aea-4f62-a892-f2183ae77e08-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c01e5313-3aea-4f62-a892-f2183ae77e08-part14', 'scsi-SQEMU_QEMU_HARDDISK_c01e5313-3aea-4f62-a892-f2183ae77e08-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c01e5313-3aea-4f62-a892-f2183ae77e08-part15', 'scsi-SQEMU_QEMU_HARDDISK_c01e5313-3aea-4f62-a892-f2183ae77e08-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c01e5313-3aea-4f62-a892-f2183ae77e08-part16', 'scsi-SQEMU_QEMU_HARDDISK_c01e5313-3aea-4f62-a892-f2183ae77e08-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-01-10 14:42:32.128477 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-10-13-45-54-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-10 14:42:32.128486 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-10-13-45-36-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-10 14:42:32.128492 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:42:32.128499 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:42:32.128505 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:42:32.128510 | orchestrator |
2026-01-10 14:42:32.128577 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-01-10 14:42:32.128588 | orchestrator | Saturday 10 January 2026 14:31:34 +0000 (0:00:02.113) 0:00:36.015 ******
2026-01-10 14:42:32.128595 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:42:32.128602 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:42:32.128608 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:42:32.128614 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:42:32.128626 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:42:32.128632 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:42:32.128639 | orchestrator |
2026-01-10 14:42:32.128645 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-01-10 14:42:32.128651 | orchestrator | Saturday 10 January 2026 14:31:36 +0000 (0:00:01.826) 0:00:37.841 ******
2026-01-10 14:42:32.128657 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:42:32.128662 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:42:32.128668 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:42:32.128674 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:42:32.128680 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:42:32.128685 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:42:32.128691 | orchestrator |
2026-01-10 14:42:32.128697 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-01-10 14:42:32.128703 | orchestrator | Saturday 10 January 2026 14:31:38 +0000 (0:00:01.512) 0:00:39.354 ******
2026-01-10 14:42:32.128710 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:42:32.128716 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:42:32.128722 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:42:32.128728 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:42:32.128733 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:42:32.128737 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:42:32.128741 | orchestrator |
2026-01-10 14:42:32.128745 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-01-10 14:42:32.128748 | orchestrator | Saturday 10 January 2026 14:31:39 +0000 (0:00:01.196) 0:00:40.550 ******
2026-01-10 14:42:32.128752 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:42:32.128756 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:42:32.128759 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:42:32.128763 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:42:32.128767 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:42:32.128771 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:42:32.128774 | orchestrator |
2026-01-10 14:42:32.128781 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-01-10 14:42:32.128786 | orchestrator | Saturday 10 January 2026 14:31:40 +0000 (0:00:00.852) 0:00:41.403 ******
2026-01-10 14:42:32.128792 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:42:32.128799 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:42:32.128804 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:42:32.128810 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:42:32.128816 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:42:32.128822 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:42:32.128828 | orchestrator |
2026-01-10 14:42:32.128835 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-01-10 14:42:32.128841 | orchestrator | Saturday 10 January 2026 14:31:41 +0000 (0:00:01.452) 0:00:42.856 ******
2026-01-10 14:42:32.128848 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:42:32.128854 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:42:32.128861 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:42:32.128865 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:42:32.128871 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:42:32.128877 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:42:32.128883 | orchestrator |
2026-01-10 14:42:32.128895 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-01-10 14:42:32.128902 | orchestrator | Saturday 10 January 2026 14:31:42 +0000 (0:00:01.080) 0:00:43.936 ******
2026-01-10 14:42:32.128908 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-01-10 14:42:32.128915 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-01-10 14:42:32.128937 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-01-10 14:42:32.128944 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-01-10 14:42:32.128951 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-01-10 14:42:32.128958 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-01-10 14:42:32.128970 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-01-10 14:42:32.128977 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-01-10 14:42:32.128983 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-01-10 14:42:32.128990 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-01-10 14:42:32.128996 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2026-01-10 14:42:32.129002 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-01-10 14:42:32.129009 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-01-10 14:42:32.129016 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2026-01-10 14:42:32.129022 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2026-01-10 14:42:32.129028 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-01-10 14:42:32.129035 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2026-01-10 14:42:32.129041 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-01-10 14:42:32.129047 | orchestrator |
2026-01-10 14:42:32.129053 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-01-10 14:42:32.129059 | orchestrator | Saturday 10 January 2026 14:31:46 +0000 (0:00:03.534) 0:00:47.471 ******
2026-01-10 14:42:32.129066 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-01-10 14:42:32.129072 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-01-10 14:42:32.129078 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-01-10 14:42:32.129084 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-01-10 14:42:32.129091 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-01-10 14:42:32.129097 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-01-10 14:42:32.129103 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:42:32.129109 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-01-10 14:42:32.129152 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-01-10 14:42:32.129160 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-01-10 14:42:32.129166 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:42:32.129173 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-01-10 14:42:32.129179 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:42:32.129185 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-01-10 14:42:32.129191 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-01-10 14:42:32.129197 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:42:32.129202 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-01-10 14:42:32.129208 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-01-10 14:42:32.129214 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-01-10 14:42:32.129221 |
orchestrator | skipping: [testbed-node-1] 2026-01-10 14:42:32.129227 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-01-10 14:42:32.129235 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-01-10 14:42:32.129242 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-01-10 14:42:32.129249 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:42:32.129261 | orchestrator | 2026-01-10 14:42:32.129273 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-01-10 14:42:32.129285 | orchestrator | Saturday 10 January 2026 14:31:47 +0000 (0:00:01.008) 0:00:48.479 ****** 2026-01-10 14:42:32.129297 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:42:32.129309 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:42:32.129318 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:42:32.129325 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-10 14:42:32.129332 | orchestrator | 2026-01-10 14:42:32.129339 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-01-10 14:42:32.129353 | orchestrator | Saturday 10 January 2026 14:31:48 +0000 (0:00:01.320) 0:00:49.800 ****** 2026-01-10 14:42:32.129360 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:42:32.129366 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:42:32.129373 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:42:32.129380 | orchestrator | 2026-01-10 14:42:32.129386 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-01-10 14:42:32.129393 | orchestrator | Saturday 10 January 2026 14:31:49 +0000 (0:00:00.301) 0:00:50.101 ****** 2026-01-10 14:42:32.129422 | orchestrator | skipping: [testbed-node-3] 2026-01-10 
14:42:32.129429 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:42:32.129436 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:42:32.129479 | orchestrator | 2026-01-10 14:42:32.129487 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-01-10 14:42:32.129494 | orchestrator | Saturday 10 January 2026 14:31:49 +0000 (0:00:00.499) 0:00:50.601 ****** 2026-01-10 14:42:32.129500 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:42:32.129507 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:42:32.129513 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:42:32.129520 | orchestrator | 2026-01-10 14:42:32.129527 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-01-10 14:42:32.129539 | orchestrator | Saturday 10 January 2026 14:31:50 +0000 (0:00:01.067) 0:00:51.668 ****** 2026-01-10 14:42:32.129546 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:42:32.129552 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:42:32.129559 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:42:32.129565 | orchestrator | 2026-01-10 14:42:32.129572 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-01-10 14:42:32.129579 | orchestrator | Saturday 10 January 2026 14:31:51 +0000 (0:00:00.649) 0:00:52.318 ****** 2026-01-10 14:42:32.129585 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-10 14:42:32.129592 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-10 14:42:32.129598 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-10 14:42:32.129604 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:42:32.129611 | orchestrator | 2026-01-10 14:42:32.129617 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-01-10 14:42:32.129624 | orchestrator | Saturday 10 January 2026 
14:31:52 +0000 (0:00:00.957) 0:00:53.276 ****** 2026-01-10 14:42:32.129630 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-10 14:42:32.129636 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-10 14:42:32.129642 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-10 14:42:32.129649 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:42:32.129655 | orchestrator | 2026-01-10 14:42:32.129662 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-01-10 14:42:32.129668 | orchestrator | Saturday 10 January 2026 14:31:52 +0000 (0:00:00.490) 0:00:53.766 ****** 2026-01-10 14:42:32.129675 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-10 14:42:32.129681 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-10 14:42:32.129687 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-10 14:42:32.129692 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:42:32.129698 | orchestrator | 2026-01-10 14:42:32.129703 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-01-10 14:42:32.129709 | orchestrator | Saturday 10 January 2026 14:31:53 +0000 (0:00:00.411) 0:00:54.178 ****** 2026-01-10 14:42:32.129715 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:42:32.129720 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:42:32.129727 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:42:32.129733 | orchestrator | 2026-01-10 14:42:32.129739 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-01-10 14:42:32.129753 | orchestrator | Saturday 10 January 2026 14:31:53 +0000 (0:00:00.430) 0:00:54.609 ****** 2026-01-10 14:42:32.129760 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-01-10 14:42:32.129767 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-01-10 
14:42:32.129816 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-01-10 14:42:32.129824 | orchestrator | 2026-01-10 14:42:32.129831 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-01-10 14:42:32.129838 | orchestrator | Saturday 10 January 2026 14:31:55 +0000 (0:00:01.677) 0:00:56.287 ****** 2026-01-10 14:42:32.129845 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-10 14:42:32.129851 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-10 14:42:32.129858 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-10 14:42:32.129865 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-01-10 14:42:32.129872 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-01-10 14:42:32.129878 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-01-10 14:42:32.129885 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-01-10 14:42:32.129892 | orchestrator | 2026-01-10 14:42:32.129898 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-01-10 14:42:32.129905 | orchestrator | Saturday 10 January 2026 14:31:56 +0000 (0:00:00.959) 0:00:57.246 ****** 2026-01-10 14:42:32.129912 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-10 14:42:32.129919 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-10 14:42:32.129926 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-10 14:42:32.129933 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-01-10 14:42:32.129940 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-01-10 14:42:32.129946 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-01-10 14:42:32.129954 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-01-10 14:42:32.129960 | orchestrator | 2026-01-10 14:42:32.129967 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-01-10 14:42:32.129974 | orchestrator | Saturday 10 January 2026 14:31:58 +0000 (0:00:01.957) 0:00:59.204 ****** 2026-01-10 14:42:32.129982 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:42:32.129990 | orchestrator | 2026-01-10 14:42:32.129997 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-01-10 14:42:32.130004 | orchestrator | Saturday 10 January 2026 14:31:59 +0000 (0:00:01.299) 0:01:00.503 ****** 2026-01-10 14:42:32.130011 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:42:32.130071 | orchestrator | 2026-01-10 14:42:32.130083 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-01-10 14:42:32.130091 | orchestrator | Saturday 10 January 2026 14:32:00 +0000 (0:00:01.425) 0:01:01.929 ****** 2026-01-10 14:42:32.130098 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:42:32.130105 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:42:32.130112 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:42:32.130119 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:42:32.130126 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:42:32.130133 | orchestrator | ok: [testbed-node-2] 
2026-01-10 14:42:32.130140 | orchestrator | 2026-01-10 14:42:32.130147 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-01-10 14:42:32.130160 | orchestrator | Saturday 10 January 2026 14:32:02 +0000 (0:00:01.452) 0:01:03.382 ****** 2026-01-10 14:42:32.130167 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:42:32.130174 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:42:32.130181 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:42:32.130188 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:42:32.130195 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:42:32.130202 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:42:32.130209 | orchestrator | 2026-01-10 14:42:32.130216 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-01-10 14:42:32.130224 | orchestrator | Saturday 10 January 2026 14:32:03 +0000 (0:00:01.164) 0:01:04.547 ****** 2026-01-10 14:42:32.130231 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:42:32.130238 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:42:32.130245 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:42:32.130252 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:42:32.130259 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:42:32.130266 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:42:32.130272 | orchestrator | 2026-01-10 14:42:32.130278 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-01-10 14:42:32.130284 | orchestrator | Saturday 10 January 2026 14:32:04 +0000 (0:00:01.149) 0:01:05.696 ****** 2026-01-10 14:42:32.130290 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:42:32.130296 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:42:32.130302 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:42:32.130307 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:42:32.130313 | orchestrator | skipping: 
[testbed-node-2] 2026-01-10 14:42:32.130320 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:42:32.130326 | orchestrator | 2026-01-10 14:42:32.130333 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-01-10 14:42:32.130339 | orchestrator | Saturday 10 January 2026 14:32:05 +0000 (0:00:01.051) 0:01:06.748 ****** 2026-01-10 14:42:32.130345 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:42:32.130352 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:42:32.130358 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:42:32.130364 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:42:32.130371 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:42:32.130406 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:42:32.130414 | orchestrator | 2026-01-10 14:42:32.130420 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-01-10 14:42:32.130426 | orchestrator | Saturday 10 January 2026 14:32:07 +0000 (0:00:01.487) 0:01:08.236 ****** 2026-01-10 14:42:32.130433 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:42:32.130438 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:42:32.130490 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:42:32.130496 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:42:32.130502 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:42:32.130508 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:42:32.130515 | orchestrator | 2026-01-10 14:42:32.130521 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-01-10 14:42:32.130527 | orchestrator | Saturday 10 January 2026 14:32:07 +0000 (0:00:00.767) 0:01:09.003 ****** 2026-01-10 14:42:32.130533 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:42:32.130539 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:42:32.130544 | orchestrator | skipping: [testbed-node-5] 2026-01-10 
14:42:32.130550 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:42:32.130556 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:42:32.130562 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:42:32.130568 | orchestrator | 2026-01-10 14:42:32.130573 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-01-10 14:42:32.130579 | orchestrator | Saturday 10 January 2026 14:32:09 +0000 (0:00:01.169) 0:01:10.173 ****** 2026-01-10 14:42:32.130585 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:42:32.130591 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:42:32.130605 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:42:32.130611 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:42:32.130617 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:42:32.130623 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:42:32.130629 | orchestrator | 2026-01-10 14:42:32.130635 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-01-10 14:42:32.130640 | orchestrator | Saturday 10 January 2026 14:32:10 +0000 (0:00:01.860) 0:01:12.033 ****** 2026-01-10 14:42:32.130646 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:42:32.130652 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:42:32.130658 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:42:32.130664 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:42:32.130673 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:42:32.130683 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:42:32.130689 | orchestrator | 2026-01-10 14:42:32.130695 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-01-10 14:42:32.130701 | orchestrator | Saturday 10 January 2026 14:32:13 +0000 (0:00:02.414) 0:01:14.448 ****** 2026-01-10 14:42:32.130707 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:42:32.130714 | orchestrator | skipping: [testbed-node-4] 2026-01-10 
14:42:32.130719 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:42:32.130725 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:42:32.130732 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:42:32.130739 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:42:32.130746 | orchestrator | 2026-01-10 14:42:32.130751 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-01-10 14:42:32.130758 | orchestrator | Saturday 10 January 2026 14:32:13 +0000 (0:00:00.510) 0:01:14.958 ****** 2026-01-10 14:42:32.130763 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:42:32.130769 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:42:32.130776 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:42:32.130782 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:42:32.130787 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:42:32.130793 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:42:32.130799 | orchestrator | 2026-01-10 14:42:32.130812 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-01-10 14:42:32.130818 | orchestrator | Saturday 10 January 2026 14:32:14 +0000 (0:00:00.748) 0:01:15.707 ****** 2026-01-10 14:42:32.130824 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:42:32.130829 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:42:32.130835 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:42:32.130841 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:42:32.130847 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:42:32.130852 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:42:32.130859 | orchestrator | 2026-01-10 14:42:32.130865 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-01-10 14:42:32.130870 | orchestrator | Saturday 10 January 2026 14:32:15 +0000 (0:00:00.891) 0:01:16.598 ****** 2026-01-10 14:42:32.130876 | orchestrator | ok: 
[testbed-node-3] 2026-01-10 14:42:32.130882 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:42:32.130888 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:42:32.130894 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:42:32.130901 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:42:32.130907 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:42:32.130912 | orchestrator | 2026-01-10 14:42:32.130919 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-01-10 14:42:32.130925 | orchestrator | Saturday 10 January 2026 14:32:16 +0000 (0:00:00.776) 0:01:17.375 ****** 2026-01-10 14:42:32.130931 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:42:32.130937 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:42:32.130943 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:42:32.130949 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:42:32.130956 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:42:32.130962 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:42:32.130974 | orchestrator | 2026-01-10 14:42:32.130980 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-01-10 14:42:32.130987 | orchestrator | Saturday 10 January 2026 14:32:17 +0000 (0:00:00.826) 0:01:18.201 ****** 2026-01-10 14:42:32.130993 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:42:32.130999 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:42:32.131006 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:42:32.131012 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:42:32.131018 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:42:32.131025 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:42:32.131031 | orchestrator | 2026-01-10 14:42:32.131038 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-01-10 14:42:32.131044 | orchestrator | Saturday 10 January 
2026 14:32:18 +0000 (0:00:01.276) 0:01:19.478 ****** 2026-01-10 14:42:32.131050 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:42:32.131056 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:42:32.131062 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:42:32.131068 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:42:32.131110 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:42:32.131117 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:42:32.131123 | orchestrator | 2026-01-10 14:42:32.131129 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-01-10 14:42:32.131135 | orchestrator | Saturday 10 January 2026 14:32:19 +0000 (0:00:00.972) 0:01:20.451 ****** 2026-01-10 14:42:32.131141 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:42:32.131146 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:42:32.131152 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:42:32.131158 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:42:32.131164 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:42:32.131170 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:42:32.131176 | orchestrator | 2026-01-10 14:42:32.131182 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-01-10 14:42:32.131189 | orchestrator | Saturday 10 January 2026 14:32:20 +0000 (0:00:01.469) 0:01:21.920 ****** 2026-01-10 14:42:32.131195 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:42:32.131201 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:42:32.131207 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:42:32.131213 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:42:32.131218 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:42:32.131224 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:42:32.131230 | orchestrator | 2026-01-10 14:42:32.131236 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] 
************************* 2026-01-10 14:42:32.131243 | orchestrator | Saturday 10 January 2026 14:32:21 +0000 (0:00:00.831) 0:01:22.751 ****** 2026-01-10 14:42:32.131249 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:42:32.131255 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:42:32.131261 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:42:32.131267 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:42:32.131273 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:42:32.131278 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:42:32.131284 | orchestrator | 2026-01-10 14:42:32.131290 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-01-10 14:42:32.131296 | orchestrator | Saturday 10 January 2026 14:32:23 +0000 (0:00:01.704) 0:01:24.456 ****** 2026-01-10 14:42:32.131302 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:42:32.131308 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:42:32.131313 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:42:32.131319 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:42:32.131325 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:42:32.131331 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:42:32.131336 | orchestrator | 2026-01-10 14:42:32.131343 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-01-10 14:42:32.131349 | orchestrator | Saturday 10 January 2026 14:32:24 +0000 (0:00:01.608) 0:01:26.064 ****** 2026-01-10 14:42:32.131355 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:42:32.131371 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:42:32.131377 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:42:32.131383 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:42:32.131389 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:42:32.131394 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:42:32.131400 | orchestrator | 2026-01-10 
14:42:32.131406 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-01-10 14:42:32.131412 | orchestrator | Saturday 10 January 2026 14:32:27 +0000 (0:00:02.662) 0:01:28.727 ****** 2026-01-10 14:42:32.131419 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:42:32.131426 | orchestrator | 2026-01-10 14:42:32.131437 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-01-10 14:42:32.131465 | orchestrator | Saturday 10 January 2026 14:32:29 +0000 (0:00:01.430) 0:01:30.157 ****** 2026-01-10 14:42:32.131471 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:42:32.131477 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:42:32.131483 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:42:32.131490 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:42:32.131496 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:42:32.131502 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:42:32.131508 | orchestrator | 2026-01-10 14:42:32.131515 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-01-10 14:42:32.131521 | orchestrator | Saturday 10 January 2026 14:32:29 +0000 (0:00:00.603) 0:01:30.761 ****** 2026-01-10 14:42:32.131528 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:42:32.131534 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:42:32.131540 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:42:32.131546 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:42:32.131553 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:42:32.131559 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:42:32.131565 | orchestrator | 2026-01-10 14:42:32.131571 | orchestrator | TASK [ceph-container-common : Remove ceph 
udev rules] ************************** 2026-01-10 14:42:32.131578 | orchestrator | Saturday 10 January 2026 14:32:30 +0000 (0:00:00.700) 0:01:31.461 ****** 2026-01-10 14:42:32.131584 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-01-10 14:42:32.131590 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-01-10 14:42:32.131596 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-01-10 14:42:32.131601 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-01-10 14:42:32.131607 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-01-10 14:42:32.131613 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-01-10 14:42:32.131619 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-01-10 14:42:32.131625 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-01-10 14:42:32.131628 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-01-10 14:42:32.131632 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-01-10 14:42:32.131660 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-01-10 14:42:32.131665 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-01-10 14:42:32.131669 | orchestrator | 2026-01-10 14:42:32.131673 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-01-10 14:42:32.131677 | orchestrator | Saturday 10 January 2026 14:32:31 +0000 (0:00:01.461) 0:01:32.923 ****** 2026-01-10 14:42:32.131680 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:42:32.131689 | 
orchestrator | changed: [testbed-node-3] 2026-01-10 14:42:32.131693 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:42:32.131697 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:42:32.131701 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:42:32.131707 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:42:32.131713 | orchestrator | 2026-01-10 14:42:32.131719 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-01-10 14:42:32.131725 | orchestrator | Saturday 10 January 2026 14:32:32 +0000 (0:00:01.120) 0:01:34.043 ****** 2026-01-10 14:42:32.131731 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:42:32.131737 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:42:32.131743 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:42:32.131749 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:42:32.131756 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:42:32.131762 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:42:32.131768 | orchestrator | 2026-01-10 14:42:32.131775 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-01-10 14:42:32.131782 | orchestrator | Saturday 10 January 2026 14:32:33 +0000 (0:00:00.590) 0:01:34.633 ****** 2026-01-10 14:42:32.131788 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:42:32.131795 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:42:32.131801 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:42:32.131807 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:42:32.131813 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:42:32.131819 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:42:32.131827 | orchestrator | 2026-01-10 14:42:32.131831 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-01-10 14:42:32.131835 | orchestrator | Saturday 10 January 2026 14:32:34 +0000 
(0:00:00.660) 0:01:35.293 ****** 2026-01-10 14:42:32.131839 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:42:32.131843 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:42:32.131846 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:42:32.131850 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:42:32.131854 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:42:32.131857 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:42:32.131861 | orchestrator | 2026-01-10 14:42:32.131865 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-01-10 14:42:32.131868 | orchestrator | Saturday 10 January 2026 14:32:34 +0000 (0:00:00.518) 0:01:35.811 ****** 2026-01-10 14:42:32.131873 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:42:32.131877 | orchestrator | 2026-01-10 14:42:32.131881 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-01-10 14:42:32.131884 | orchestrator | Saturday 10 January 2026 14:32:35 +0000 (0:00:01.163) 0:01:36.975 ****** 2026-01-10 14:42:32.131888 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:42:32.131897 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:42:32.131901 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:42:32.131905 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:42:32.131909 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:42:32.131912 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:42:32.131917 | orchestrator | 2026-01-10 14:42:32.131923 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-01-10 14:42:32.131929 | orchestrator | Saturday 10 January 2026 14:33:25 +0000 (0:00:49.263) 0:02:26.238 ****** 2026-01-10 14:42:32.131936 | orchestrator | skipping: 
[testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-01-10 14:42:32.131942 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2026-01-10 14:42:32.131947 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2026-01-10 14:42:32.131954 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:42:32.131961 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-01-10 14:42:32.131977 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2026-01-10 14:42:32.131982 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2026-01-10 14:42:32.131986 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:42:32.131989 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-01-10 14:42:32.131994 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2026-01-10 14:42:32.132000 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2026-01-10 14:42:32.132006 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:42:32.132013 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-01-10 14:42:32.132018 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2026-01-10 14:42:32.132024 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2026-01-10 14:42:32.132031 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:42:32.132037 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-01-10 14:42:32.132044 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2026-01-10 14:42:32.132050 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2026-01-10 14:42:32.132056 | 
orchestrator | skipping: [testbed-node-1] 2026-01-10 14:42:32.132085 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-01-10 14:42:32.132093 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2026-01-10 14:42:32.132099 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2026-01-10 14:42:32.132105 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:42:32.132111 | orchestrator | 2026-01-10 14:42:32.132117 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-01-10 14:42:32.132123 | orchestrator | Saturday 10 January 2026 14:33:25 +0000 (0:00:00.679) 0:02:26.918 ****** 2026-01-10 14:42:32.132129 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:42:32.132136 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:42:32.132142 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:42:32.132149 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:42:32.132155 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:42:32.132161 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:42:32.132167 | orchestrator | 2026-01-10 14:42:32.132174 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-01-10 14:42:32.132180 | orchestrator | Saturday 10 January 2026 14:33:26 +0000 (0:00:00.788) 0:02:27.707 ****** 2026-01-10 14:42:32.132186 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:42:32.132192 | orchestrator | 2026-01-10 14:42:32.132198 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-01-10 14:42:32.132204 | orchestrator | Saturday 10 January 2026 14:33:26 +0000 (0:00:00.147) 0:02:27.854 ****** 2026-01-10 14:42:32.132211 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:42:32.132217 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:42:32.132223 | orchestrator 
| skipping: [testbed-node-5] 2026-01-10 14:42:32.132229 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:42:32.132234 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:42:32.132241 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:42:32.132247 | orchestrator | 2026-01-10 14:42:32.132253 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-01-10 14:42:32.132260 | orchestrator | Saturday 10 January 2026 14:33:27 +0000 (0:00:00.671) 0:02:28.525 ****** 2026-01-10 14:42:32.132266 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:42:32.132272 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:42:32.132278 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:42:32.132285 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:42:32.132291 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:42:32.132304 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:42:32.132308 | orchestrator | 2026-01-10 14:42:32.132311 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-01-10 14:42:32.132315 | orchestrator | Saturday 10 January 2026 14:33:28 +0000 (0:00:00.915) 0:02:29.441 ****** 2026-01-10 14:42:32.132319 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:42:32.132323 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:42:32.132326 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:42:32.132330 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:42:32.132334 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:42:32.132338 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:42:32.132341 | orchestrator | 2026-01-10 14:42:32.132345 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-01-10 14:42:32.132349 | orchestrator | Saturday 10 January 2026 14:33:29 +0000 (0:00:00.736) 0:02:30.178 ****** 2026-01-10 14:42:32.132353 | orchestrator | ok: 
[testbed-node-3] 2026-01-10 14:42:32.132356 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:42:32.132360 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:42:32.132364 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:42:32.132372 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:42:32.132376 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:42:32.132380 | orchestrator | 2026-01-10 14:42:32.132383 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-01-10 14:42:32.132387 | orchestrator | Saturday 10 January 2026 14:33:32 +0000 (0:00:02.956) 0:02:33.134 ****** 2026-01-10 14:42:32.132391 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:42:32.132395 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:42:32.132399 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:42:32.132402 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:42:32.132406 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:42:32.132409 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:42:32.132413 | orchestrator | 2026-01-10 14:42:32.132417 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-01-10 14:42:32.132420 | orchestrator | Saturday 10 January 2026 14:33:32 +0000 (0:00:00.690) 0:02:33.824 ****** 2026-01-10 14:42:32.132425 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:42:32.132430 | orchestrator | 2026-01-10 14:42:32.132434 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-01-10 14:42:32.132438 | orchestrator | Saturday 10 January 2026 14:33:33 +0000 (0:00:01.242) 0:02:35.067 ****** 2026-01-10 14:42:32.132460 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:42:32.132466 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:42:32.132472 | orchestrator | skipping: 
[testbed-node-5] 2026-01-10 14:42:32.132478 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:42:32.132484 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:42:32.132491 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:42:32.132495 | orchestrator | 2026-01-10 14:42:32.132499 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-01-10 14:42:32.132502 | orchestrator | Saturday 10 January 2026 14:33:34 +0000 (0:00:00.772) 0:02:35.839 ****** 2026-01-10 14:42:32.132506 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:42:32.132510 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:42:32.132514 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:42:32.132517 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:42:32.132521 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:42:32.132525 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:42:32.132528 | orchestrator | 2026-01-10 14:42:32.132532 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-01-10 14:42:32.132536 | orchestrator | Saturday 10 January 2026 14:33:35 +0000 (0:00:00.659) 0:02:36.499 ****** 2026-01-10 14:42:32.132540 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:42:32.132544 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:42:32.132570 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:42:32.132575 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:42:32.132579 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:42:32.132583 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:42:32.132586 | orchestrator | 2026-01-10 14:42:32.132590 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-01-10 14:42:32.132594 | orchestrator | Saturday 10 January 2026 14:33:36 +0000 (0:00:00.851) 0:02:37.351 ****** 2026-01-10 14:42:32.132598 | orchestrator | skipping: 
[testbed-node-3] 2026-01-10 14:42:32.132602 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:42:32.132605 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:42:32.132609 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:42:32.132613 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:42:32.132617 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:42:32.132620 | orchestrator | 2026-01-10 14:42:32.132624 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-01-10 14:42:32.132628 | orchestrator | Saturday 10 January 2026 14:33:36 +0000 (0:00:00.690) 0:02:38.042 ****** 2026-01-10 14:42:32.132632 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:42:32.132635 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:42:32.132639 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:42:32.132643 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:42:32.132647 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:42:32.132650 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:42:32.132654 | orchestrator | 2026-01-10 14:42:32.132658 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-01-10 14:42:32.132661 | orchestrator | Saturday 10 January 2026 14:33:38 +0000 (0:00:01.043) 0:02:39.085 ****** 2026-01-10 14:42:32.132665 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:42:32.132669 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:42:32.132672 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:42:32.132678 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:42:32.132685 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:42:32.132691 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:42:32.132697 | orchestrator | 2026-01-10 14:42:32.132703 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-01-10 14:42:32.132710 | 
orchestrator | Saturday 10 January 2026 14:33:38 +0000 (0:00:00.692) 0:02:39.778 ****** 2026-01-10 14:42:32.132716 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:42:32.132723 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:42:32.132729 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:42:32.132735 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:42:32.132742 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:42:32.132748 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:42:32.132754 | orchestrator | 2026-01-10 14:42:32.132761 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-01-10 14:42:32.132769 | orchestrator | Saturday 10 January 2026 14:33:39 +0000 (0:00:00.755) 0:02:40.533 ****** 2026-01-10 14:42:32.132775 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:42:32.132782 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:42:32.132788 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:42:32.132794 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:42:32.132800 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:42:32.132807 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:42:32.132814 | orchestrator | 2026-01-10 14:42:32.132820 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-01-10 14:42:32.132826 | orchestrator | Saturday 10 January 2026 14:33:40 +0000 (0:00:00.734) 0:02:41.268 ****** 2026-01-10 14:42:32.132832 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:42:32.132844 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:42:32.132851 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:42:32.132858 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:42:32.132864 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:42:32.132876 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:42:32.132882 | orchestrator | 2026-01-10 14:42:32.132888 | orchestrator | TASK 
[ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-01-10 14:42:32.132895 | orchestrator | Saturday 10 January 2026 14:33:41 +0000 (0:00:01.357) 0:02:42.625 ****** 2026-01-10 14:42:32.132902 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:42:32.132909 | orchestrator | 2026-01-10 14:42:32.132915 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-01-10 14:42:32.132922 | orchestrator | Saturday 10 January 2026 14:33:42 +0000 (0:00:01.210) 0:02:43.835 ****** 2026-01-10 14:42:32.132928 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph) 2026-01-10 14:42:32.132935 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph) 2026-01-10 14:42:32.132941 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph) 2026-01-10 14:42:32.132947 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph) 2026-01-10 14:42:32.132954 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/) 2026-01-10 14:42:32.132961 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/) 2026-01-10 14:42:32.132970 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph) 2026-01-10 14:42:32.132981 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/) 2026-01-10 14:42:32.132987 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/) 2026-01-10 14:42:32.132993 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon) 2026-01-10 14:42:32.132999 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2026-01-10 14:42:32.133005 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph) 2026-01-10 14:42:32.133010 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/) 2026-01-10 14:42:32.133017 | orchestrator | changed: [testbed-node-5] => 
(item=/var/lib/ceph/mon) 2026-01-10 14:42:32.133022 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon) 2026-01-10 14:42:32.133029 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd) 2026-01-10 14:42:32.133036 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd) 2026-01-10 14:42:32.133042 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/) 2026-01-10 14:42:32.133082 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon) 2026-01-10 14:42:32.133090 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd) 2026-01-10 14:42:32.133096 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds) 2026-01-10 14:42:32.133102 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd) 2026-01-10 14:42:32.133108 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds) 2026-01-10 14:42:32.133115 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon) 2026-01-10 14:42:32.133121 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd) 2026-01-10 14:42:32.133127 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds) 2026-01-10 14:42:32.133133 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2026-01-10 14:42:32.133139 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds) 2026-01-10 14:42:32.133145 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2026-01-10 14:42:32.133151 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd) 2026-01-10 14:42:32.133157 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds) 2026-01-10 14:42:32.133164 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash) 2026-01-10 14:42:32.133170 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2026-01-10 14:42:32.133176 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2026-01-10 
14:42:32.133182 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds) 2026-01-10 14:42:32.133188 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash) 2026-01-10 14:42:32.133201 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2026-01-10 14:42:32.133207 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2026-01-10 14:42:32.133213 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash) 2026-01-10 14:42:32.133219 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash) 2026-01-10 14:42:32.133225 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2026-01-10 14:42:32.133231 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2026-01-10 14:42:32.133238 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2026-01-10 14:42:32.133244 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash) 2026-01-10 14:42:32.133250 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2026-01-10 14:42:32.133256 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash) 2026-01-10 14:42:32.133262 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2026-01-10 14:42:32.133268 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2026-01-10 14:42:32.133274 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2026-01-10 14:42:32.133280 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2026-01-10 14:42:32.133287 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2026-01-10 14:42:32.133293 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2026-01-10 14:42:32.133304 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2026-01-10 14:42:32.133311 | orchestrator | changed: 
[testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2026-01-10 14:42:32.133317 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2026-01-10 14:42:32.133323 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2026-01-10 14:42:32.133329 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2026-01-10 14:42:32.133335 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2026-01-10 14:42:32.133340 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2026-01-10 14:42:32.133347 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2026-01-10 14:42:32.133353 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2026-01-10 14:42:32.133359 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2026-01-10 14:42:32.133365 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2026-01-10 14:42:32.133371 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2026-01-10 14:42:32.133377 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2026-01-10 14:42:32.133384 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2026-01-10 14:42:32.133389 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2026-01-10 14:42:32.133395 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-01-10 14:42:32.133401 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2026-01-10 14:42:32.133407 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2026-01-10 14:42:32.133413 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2026-01-10 14:42:32.133420 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 
2026-01-10 14:42:32.133426 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2026-01-10 14:42:32.133432 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph) 2026-01-10 14:42:32.133438 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2026-01-10 14:42:32.133489 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2026-01-10 14:42:32.133529 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2026-01-10 14:42:32.133536 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-01-10 14:42:32.133541 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph) 2026-01-10 14:42:32.133547 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2026-01-10 14:42:32.133553 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2026-01-10 14:42:32.133559 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-01-10 14:42:32.133564 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-01-10 14:42:32.133570 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph) 2026-01-10 14:42:32.133576 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-01-10 14:42:32.133582 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2026-01-10 14:42:32.133587 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph) 2026-01-10 14:42:32.133593 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph) 2026-01-10 14:42:32.133599 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph) 2026-01-10 14:42:32.133604 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph) 2026-01-10 14:42:32.133610 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph) 2026-01-10 14:42:32.133616 | 
orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-01-10 14:42:32.133622 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph) 2026-01-10 14:42:32.133628 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph) 2026-01-10 14:42:32.133634 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph) 2026-01-10 14:42:32.133640 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph) 2026-01-10 14:42:32.133646 | orchestrator | 2026-01-10 14:42:32.133652 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-01-10 14:42:32.133658 | orchestrator | Saturday 10 January 2026 14:33:50 +0000 (0:00:07.961) 0:02:51.797 ****** 2026-01-10 14:42:32.133664 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:42:32.133669 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:42:32.133675 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:42:32.133681 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-10 14:42:32.133688 | orchestrator | 2026-01-10 14:42:32.133695 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2026-01-10 14:42:32.133701 | orchestrator | Saturday 10 January 2026 14:33:51 +0000 (0:00:00.958) 0:02:52.755 ****** 2026-01-10 14:42:32.133708 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-01-10 14:42:32.133713 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-01-10 14:42:32.133720 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-01-10 14:42:32.133724 | orchestrator | 2026-01-10 
14:42:32.133728 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-01-10 14:42:32.133732 | orchestrator | Saturday 10 January 2026 14:33:52 +0000 (0:00:00.879) 0:02:53.634 ****** 2026-01-10 14:42:32.133736 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-01-10 14:42:32.133739 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-01-10 14:42:32.133743 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-01-10 14:42:32.133751 | orchestrator | 2026-01-10 14:42:32.133755 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-01-10 14:42:32.133760 | orchestrator | Saturday 10 January 2026 14:33:54 +0000 (0:00:01.729) 0:02:55.363 ****** 2026-01-10 14:42:32.133766 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:42:32.133772 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:42:32.133778 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:42:32.133784 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:42:32.133790 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:42:32.133796 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:42:32.133802 | orchestrator | 2026-01-10 14:42:32.133808 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-01-10 14:42:32.133815 | orchestrator | Saturday 10 January 2026 14:33:55 +0000 (0:00:01.020) 0:02:56.384 ****** 2026-01-10 14:42:32.133821 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:42:32.133827 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:42:32.133834 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:42:32.133838 | orchestrator | skipping: [testbed-node-0] 
2026-01-10 14:42:32.133842 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:42:32.133845 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:42:32.133849 | orchestrator | 2026-01-10 14:42:32.133853 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-01-10 14:42:32.133857 | orchestrator | Saturday 10 January 2026 14:33:56 +0000 (0:00:01.270) 0:02:57.654 ****** 2026-01-10 14:42:32.133860 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:42:32.133864 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:42:32.133868 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:42:32.133871 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:42:32.133875 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:42:32.133879 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:42:32.133883 | orchestrator | 2026-01-10 14:42:32.133909 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-01-10 14:42:32.133916 | orchestrator | Saturday 10 January 2026 14:33:57 +0000 (0:00:00.726) 0:02:58.381 ****** 2026-01-10 14:42:32.133923 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:42:32.133929 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:42:32.133935 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:42:32.133941 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:42:32.133947 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:42:32.133954 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:42:32.133960 | orchestrator | 2026-01-10 14:42:32.133966 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-01-10 14:42:32.133972 | orchestrator | Saturday 10 January 2026 14:33:58 +0000 (0:00:00.731) 0:02:59.113 ****** 2026-01-10 14:42:32.133978 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:42:32.133984 | orchestrator | skipping: [testbed-node-4] 
2026-01-10 14:42:32.133990 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:42:32.133996 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:42:32.134002 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:42:32.134009 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:42:32.134050 | orchestrator | 2026-01-10 14:42:32.134057 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-01-10 14:42:32.134064 | orchestrator | Saturday 10 January 2026 14:33:58 +0000 (0:00:00.764) 0:02:59.877 ****** 2026-01-10 14:42:32.134070 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:42:32.134077 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:42:32.134083 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:42:32.134089 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:42:32.134096 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:42:32.134102 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:42:32.134108 | orchestrator | 2026-01-10 14:42:32.134115 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-01-10 14:42:32.134121 | orchestrator | Saturday 10 January 2026 14:33:59 +0000 (0:00:00.875) 0:03:00.753 ****** 2026-01-10 14:42:32.134137 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:42:32.134144 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:42:32.134150 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:42:32.134156 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:42:32.134163 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:42:32.134169 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:42:32.134175 | orchestrator | 2026-01-10 14:42:32.134181 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-01-10 14:42:32.134188 | orchestrator | 
Saturday 10 January 2026 14:34:00 +0000 (0:00:00.673) 0:03:01.426 ****** 2026-01-10 14:42:32.134194 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:42:32.134200 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:42:32.134206 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:42:32.134211 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:42:32.134217 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:42:32.134223 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:42:32.134229 | orchestrator | 2026-01-10 14:42:32.134234 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-01-10 14:42:32.134240 | orchestrator | Saturday 10 January 2026 14:34:01 +0000 (0:00:00.919) 0:03:02.345 ****** 2026-01-10 14:42:32.134246 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:42:32.134252 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:42:32.134258 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:42:32.134270 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:42:32.134277 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:42:32.134283 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:42:32.134290 | orchestrator | 2026-01-10 14:42:32.134296 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-01-10 14:42:32.134302 | orchestrator | Saturday 10 January 2026 14:34:05 +0000 (0:00:03.882) 0:03:06.228 ****** 2026-01-10 14:42:32.134309 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:42:32.134315 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:42:32.134321 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:42:32.134327 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:42:32.134333 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:42:32.134339 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:42:32.134345 | orchestrator | 2026-01-10 14:42:32.134351 | orchestrator | TASK 
[ceph-config : Set_fact _osd_memory_target] ******************************* 2026-01-10 14:42:32.134357 | orchestrator | Saturday 10 January 2026 14:34:05 +0000 (0:00:00.738) 0:03:06.967 ****** 2026-01-10 14:42:32.134363 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:42:32.134370 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:42:32.134376 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:42:32.134383 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:42:32.134389 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:42:32.134395 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:42:32.134402 | orchestrator | 2026-01-10 14:42:32.134408 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-01-10 14:42:32.134415 | orchestrator | Saturday 10 January 2026 14:34:06 +0000 (0:00:00.573) 0:03:07.540 ****** 2026-01-10 14:42:32.134421 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:42:32.134427 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:42:32.134433 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:42:32.134459 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:42:32.134466 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:42:32.134472 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:42:32.134478 | orchestrator | 2026-01-10 14:42:32.134485 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-01-10 14:42:32.134491 | orchestrator | Saturday 10 January 2026 14:34:07 +0000 (0:00:00.835) 0:03:08.376 ****** 2026-01-10 14:42:32.134497 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-01-10 14:42:32.134510 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-01-10 14:42:32.134516 | orchestrator | ok: [testbed-node-5] 
=> (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-01-10 14:42:32.134522 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:42:32.134556 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:42:32.134563 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:42:32.134570 | orchestrator | 2026-01-10 14:42:32.134576 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-01-10 14:42:32.134582 | orchestrator | Saturday 10 January 2026 14:34:08 +0000 (0:00:00.746) 0:03:09.123 ****** 2026-01-10 14:42:32.134591 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])  2026-01-10 14:42:32.134600 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])  2026-01-10 14:42:32.134608 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:42:32.134614 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])  2026-01-10 14:42:32.134621 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, 
{'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])  2026-01-10 14:42:32.134627 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])  2026-01-10 14:42:32.134634 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])  2026-01-10 14:42:32.134641 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:42:32.134647 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:42:32.134658 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:42:32.134664 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:42:32.134670 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:42:32.134677 | orchestrator | 2026-01-10 14:42:32.134683 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-01-10 14:42:32.134689 | orchestrator | Saturday 10 January 2026 14:34:09 +0000 (0:00:00.973) 0:03:10.096 ****** 2026-01-10 14:42:32.134695 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:42:32.134702 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:42:32.134708 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:42:32.134714 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:42:32.134721 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:42:32.134727 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:42:32.134750 | orchestrator | 2026-01-10 14:42:32.134756 | orchestrator | TASK [ceph-config : Create ceph conf directory] 
******************************** 2026-01-10 14:42:32.134769 | orchestrator | Saturday 10 January 2026 14:34:09 +0000 (0:00:00.620) 0:03:10.716 ****** 2026-01-10 14:42:32.134773 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:42:32.134776 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:42:32.134782 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:42:32.134789 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:42:32.134795 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:42:32.134801 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:42:32.134807 | orchestrator | 2026-01-10 14:42:32.134814 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-01-10 14:42:32.134820 | orchestrator | Saturday 10 January 2026 14:34:10 +0000 (0:00:00.771) 0:03:11.488 ****** 2026-01-10 14:42:32.134825 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:42:32.134831 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:42:32.134837 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:42:32.134843 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:42:32.134848 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:42:32.134854 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:42:32.134860 | orchestrator | 2026-01-10 14:42:32.134866 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-01-10 14:42:32.134872 | orchestrator | Saturday 10 January 2026 14:34:11 +0000 (0:00:00.657) 0:03:12.145 ****** 2026-01-10 14:42:32.134877 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:42:32.134883 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:42:32.134889 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:42:32.134894 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:42:32.134900 | orchestrator | skipping: [testbed-node-1] 2026-01-10 
14:42:32.134907 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:42:32.134912 | orchestrator | 2026-01-10 14:42:32.134919 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-01-10 14:42:32.134952 | orchestrator | Saturday 10 January 2026 14:34:11 +0000 (0:00:00.690) 0:03:12.836 ****** 2026-01-10 14:42:32.134959 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:42:32.134965 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:42:32.134971 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:42:32.134977 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:42:32.134983 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:42:32.134989 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:42:32.134995 | orchestrator | 2026-01-10 14:42:32.135001 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-01-10 14:42:32.135007 | orchestrator | Saturday 10 January 2026 14:34:12 +0000 (0:00:00.612) 0:03:13.448 ****** 2026-01-10 14:42:32.135013 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:42:32.135019 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:42:32.135024 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:42:32.135030 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:42:32.135036 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:42:32.135041 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:42:32.135047 | orchestrator | 2026-01-10 14:42:32.135053 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-01-10 14:42:32.135059 | orchestrator | Saturday 10 January 2026 14:34:13 +0000 (0:00:00.892) 0:03:14.341 ****** 2026-01-10 14:42:32.135065 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-10 14:42:32.135071 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-10 14:42:32.135078 | orchestrator | skipping: 
[testbed-node-3] => (item=testbed-node-5)  2026-01-10 14:42:32.135083 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:42:32.135089 | orchestrator | 2026-01-10 14:42:32.135095 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-01-10 14:42:32.135100 | orchestrator | Saturday 10 January 2026 14:34:13 +0000 (0:00:00.439) 0:03:14.780 ****** 2026-01-10 14:42:32.135107 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-10 14:42:32.135120 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-10 14:42:32.135126 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-10 14:42:32.135132 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:42:32.135137 | orchestrator | 2026-01-10 14:42:32.135143 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-01-10 14:42:32.135149 | orchestrator | Saturday 10 January 2026 14:34:14 +0000 (0:00:00.355) 0:03:15.136 ****** 2026-01-10 14:42:32.135155 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-10 14:42:32.135160 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-10 14:42:32.135166 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-10 14:42:32.135172 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:42:32.135178 | orchestrator | 2026-01-10 14:42:32.135185 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-01-10 14:42:32.135191 | orchestrator | Saturday 10 January 2026 14:34:14 +0000 (0:00:00.350) 0:03:15.487 ****** 2026-01-10 14:42:32.135196 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:42:32.135202 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:42:32.135208 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:42:32.135214 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:42:32.135219 | 
orchestrator | skipping: [testbed-node-2] 2026-01-10 14:42:32.135225 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:42:32.135231 | orchestrator | 2026-01-10 14:42:32.135242 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-01-10 14:42:32.135248 | orchestrator | Saturday 10 January 2026 14:34:15 +0000 (0:00:00.649) 0:03:16.136 ****** 2026-01-10 14:42:32.135254 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-01-10 14:42:32.135260 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-01-10 14:42:32.135266 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-01-10 14:42:32.135272 | orchestrator | skipping: [testbed-node-0] => (item=0)  2026-01-10 14:42:32.135277 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:42:32.135283 | orchestrator | skipping: [testbed-node-1] => (item=0)  2026-01-10 14:42:32.135289 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:42:32.135295 | orchestrator | skipping: [testbed-node-2] => (item=0)  2026-01-10 14:42:32.135300 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:42:32.135306 | orchestrator | 2026-01-10 14:42:32.135312 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-01-10 14:42:32.135318 | orchestrator | Saturday 10 January 2026 14:34:17 +0000 (0:00:02.344) 0:03:18.481 ****** 2026-01-10 14:42:32.135324 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:42:32.135330 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:42:32.135336 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:42:32.135342 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:42:32.135348 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:42:32.135353 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:42:32.135359 | orchestrator | 2026-01-10 14:42:32.135365 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-01-10 14:42:32.135370 | 
orchestrator | Saturday 10 January 2026 14:34:20 +0000 (0:00:03.435) 0:03:21.917 ****** 2026-01-10 14:42:32.135376 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:42:32.135381 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:42:32.135387 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:42:32.135392 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:42:32.135398 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:42:32.135404 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:42:32.135409 | orchestrator | 2026-01-10 14:42:32.135415 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-01-10 14:42:32.135421 | orchestrator | Saturday 10 January 2026 14:34:22 +0000 (0:00:01.271) 0:03:23.188 ****** 2026-01-10 14:42:32.135427 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:42:32.135433 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:42:32.135439 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:42:32.135495 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:42:32.135502 | orchestrator | 2026-01-10 14:42:32.135508 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2026-01-10 14:42:32.135545 | orchestrator | Saturday 10 January 2026 14:34:23 +0000 (0:00:01.183) 0:03:24.371 ****** 2026-01-10 14:42:32.135554 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:42:32.135560 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:42:32.135567 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:42:32.135573 | orchestrator | 2026-01-10 14:42:32.135579 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2026-01-10 14:42:32.135585 | orchestrator | Saturday 10 January 2026 14:34:23 +0000 (0:00:00.370) 0:03:24.741 ****** 2026-01-10 14:42:32.135591 | orchestrator | changed: [testbed-node-1] 
2026-01-10 14:42:32.135597 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:42:32.135603 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:42:32.135609 | orchestrator | 2026-01-10 14:42:32.135615 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2026-01-10 14:42:32.135621 | orchestrator | Saturday 10 January 2026 14:34:25 +0000 (0:00:01.526) 0:03:26.268 ****** 2026-01-10 14:42:32.135627 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-01-10 14:42:32.135634 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-01-10 14:42:32.135639 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-01-10 14:42:32.135645 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:42:32.135651 | orchestrator | 2026-01-10 14:42:32.135657 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2026-01-10 14:42:32.135663 | orchestrator | Saturday 10 January 2026 14:34:25 +0000 (0:00:00.625) 0:03:26.893 ****** 2026-01-10 14:42:32.135669 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:42:32.135674 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:42:32.135680 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:42:32.135686 | orchestrator | 2026-01-10 14:42:32.135693 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-01-10 14:42:32.135698 | orchestrator | Saturday 10 January 2026 14:34:26 +0000 (0:00:00.378) 0:03:27.272 ****** 2026-01-10 14:42:32.135705 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:42:32.135711 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:42:32.135717 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:42:32.135724 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-10 14:42:32.135730 | orchestrator | 2026-01-10 14:42:32.135736 | 
orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2026-01-10 14:42:32.135742 | orchestrator | Saturday 10 January 2026 14:34:27 +0000 (0:00:01.137) 0:03:28.409 ****** 2026-01-10 14:42:32.135748 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-10 14:42:32.135755 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-10 14:42:32.135761 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-10 14:42:32.135767 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:42:32.135773 | orchestrator | 2026-01-10 14:42:32.135779 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2026-01-10 14:42:32.135785 | orchestrator | Saturday 10 January 2026 14:34:27 +0000 (0:00:00.418) 0:03:28.827 ****** 2026-01-10 14:42:32.135791 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:42:32.135797 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:42:32.135803 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:42:32.135809 | orchestrator | 2026-01-10 14:42:32.135815 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2026-01-10 14:42:32.135821 | orchestrator | Saturday 10 January 2026 14:34:28 +0000 (0:00:00.350) 0:03:29.178 ****** 2026-01-10 14:42:32.135833 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:42:32.135839 | orchestrator | 2026-01-10 14:42:32.135851 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2026-01-10 14:42:32.135858 | orchestrator | Saturday 10 January 2026 14:34:28 +0000 (0:00:00.210) 0:03:29.389 ****** 2026-01-10 14:42:32.135864 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:42:32.135870 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:42:32.135876 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:42:32.135882 | orchestrator | 2026-01-10 
14:42:32.135887 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2026-01-10 14:42:32.135894 | orchestrator | Saturday 10 January 2026 14:34:28 +0000 (0:00:00.353) 0:03:29.742 ****** 2026-01-10 14:42:32.135899 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:42:32.135905 | orchestrator | 2026-01-10 14:42:32.135911 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2026-01-10 14:42:32.135918 | orchestrator | Saturday 10 January 2026 14:34:28 +0000 (0:00:00.237) 0:03:29.980 ****** 2026-01-10 14:42:32.135923 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:42:32.135929 | orchestrator | 2026-01-10 14:42:32.135935 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2026-01-10 14:42:32.135941 | orchestrator | Saturday 10 January 2026 14:34:29 +0000 (0:00:00.281) 0:03:30.261 ****** 2026-01-10 14:42:32.135947 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:42:32.135953 | orchestrator | 2026-01-10 14:42:32.135959 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2026-01-10 14:42:32.135965 | orchestrator | Saturday 10 January 2026 14:34:29 +0000 (0:00:00.119) 0:03:30.381 ****** 2026-01-10 14:42:32.135972 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:42:32.135978 | orchestrator | 2026-01-10 14:42:32.135984 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2026-01-10 14:42:32.135991 | orchestrator | Saturday 10 January 2026 14:34:30 +0000 (0:00:00.763) 0:03:31.144 ****** 2026-01-10 14:42:32.135997 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:42:32.136003 | orchestrator | 2026-01-10 14:42:32.136009 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2026-01-10 14:42:32.136015 | orchestrator | Saturday 10 January 2026 14:34:30 +0000 (0:00:00.313) 
0:03:31.458 ****** 2026-01-10 14:42:32.136021 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-10 14:42:32.136027 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-10 14:42:32.136033 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-10 14:42:32.136040 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:42:32.136046 | orchestrator | 2026-01-10 14:42:32.136052 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2026-01-10 14:42:32.136082 | orchestrator | Saturday 10 January 2026 14:34:30 +0000 (0:00:00.498) 0:03:31.956 ****** 2026-01-10 14:42:32.136089 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:42:32.136095 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:42:32.136101 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:42:32.136108 | orchestrator | 2026-01-10 14:42:32.136114 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2026-01-10 14:42:32.136120 | orchestrator | Saturday 10 January 2026 14:34:31 +0000 (0:00:00.473) 0:03:32.429 ****** 2026-01-10 14:42:32.136126 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:42:32.136132 | orchestrator | 2026-01-10 14:42:32.136139 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2026-01-10 14:42:32.136145 | orchestrator | Saturday 10 January 2026 14:34:31 +0000 (0:00:00.239) 0:03:32.668 ****** 2026-01-10 14:42:32.136151 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:42:32.136157 | orchestrator | 2026-01-10 14:42:32.136163 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-01-10 14:42:32.136169 | orchestrator | Saturday 10 January 2026 14:34:31 +0000 (0:00:00.217) 0:03:32.886 ****** 2026-01-10 14:42:32.136175 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:42:32.136180 | orchestrator | 
skipping: [testbed-node-1] 2026-01-10 14:42:32.136186 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:42:32.136200 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-10 14:42:32.136206 | orchestrator | 2026-01-10 14:42:32.136212 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2026-01-10 14:42:32.136217 | orchestrator | Saturday 10 January 2026 14:34:32 +0000 (0:00:01.123) 0:03:34.009 ****** 2026-01-10 14:42:32.136223 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:42:32.136229 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:42:32.136235 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:42:32.136241 | orchestrator | 2026-01-10 14:42:32.136247 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2026-01-10 14:42:32.136253 | orchestrator | Saturday 10 January 2026 14:34:33 +0000 (0:00:00.356) 0:03:34.366 ****** 2026-01-10 14:42:32.136259 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:42:32.136265 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:42:32.136271 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:42:32.136278 | orchestrator | 2026-01-10 14:42:32.136284 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2026-01-10 14:42:32.136290 | orchestrator | Saturday 10 January 2026 14:34:34 +0000 (0:00:01.235) 0:03:35.601 ****** 2026-01-10 14:42:32.136296 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-10 14:42:32.136302 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-10 14:42:32.136309 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-10 14:42:32.136315 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:42:32.136321 | orchestrator | 2026-01-10 14:42:32.136327 | orchestrator | RUNNING HANDLER [ceph-handler : 
Set _mds_handler_called after restart] ********* 2026-01-10 14:42:32.136333 | orchestrator | Saturday 10 January 2026 14:34:35 +0000 (0:00:00.921) 0:03:36.523 ****** 2026-01-10 14:42:32.136338 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:42:32.136344 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:42:32.136350 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:42:32.136357 | orchestrator | 2026-01-10 14:42:32.136363 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-01-10 14:42:32.136374 | orchestrator | Saturday 10 January 2026 14:34:36 +0000 (0:00:00.647) 0:03:37.170 ****** 2026-01-10 14:42:32.136380 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:42:32.136386 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:42:32.136393 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:42:32.136399 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-10 14:42:32.136405 | orchestrator | 2026-01-10 14:42:32.136411 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-01-10 14:42:32.136417 | orchestrator | Saturday 10 January 2026 14:34:36 +0000 (0:00:00.864) 0:03:38.035 ****** 2026-01-10 14:42:32.136424 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:42:32.136430 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:42:32.136436 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:42:32.136461 | orchestrator | 2026-01-10 14:42:32.136467 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-01-10 14:42:32.136472 | orchestrator | Saturday 10 January 2026 14:34:37 +0000 (0:00:00.576) 0:03:38.611 ****** 2026-01-10 14:42:32.136478 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:42:32.136483 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:42:32.136489 | orchestrator | changed: [testbed-node-5] 
2026-01-10 14:42:32.136495 | orchestrator | 2026-01-10 14:42:32.136501 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2026-01-10 14:42:32.136506 | orchestrator | Saturday 10 January 2026 14:34:38 +0000 (0:00:01.257) 0:03:39.869 ****** 2026-01-10 14:42:32.136513 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-10 14:42:32.136519 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-10 14:42:32.136525 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-10 14:42:32.136537 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:42:32.136544 | orchestrator | 2026-01-10 14:42:32.136550 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-01-10 14:42:32.136556 | orchestrator | Saturday 10 January 2026 14:34:39 +0000 (0:00:00.611) 0:03:40.481 ****** 2026-01-10 14:42:32.136562 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:42:32.136568 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:42:32.136574 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:42:32.136580 | orchestrator | 2026-01-10 14:42:32.136588 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2026-01-10 14:42:32.136594 | orchestrator | Saturday 10 January 2026 14:34:39 +0000 (0:00:00.352) 0:03:40.833 ****** 2026-01-10 14:42:32.136601 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:42:32.136607 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:42:32.136613 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:42:32.136619 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:42:32.136625 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:42:32.136660 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:42:32.136666 | orchestrator | 2026-01-10 14:42:32.136672 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] 
********************************** 2026-01-10 14:42:32.136678 | orchestrator | Saturday 10 January 2026 14:34:40 +0000 (0:00:00.947) 0:03:41.780 ****** 2026-01-10 14:42:32.136684 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:42:32.136690 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:42:32.136695 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:42:32.136701 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:42:32.136707 | orchestrator | 2026-01-10 14:42:32.136713 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2026-01-10 14:42:32.136719 | orchestrator | Saturday 10 January 2026 14:34:41 +0000 (0:00:00.873) 0:03:42.654 ****** 2026-01-10 14:42:32.136724 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:42:32.136730 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:42:32.136736 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:42:32.136741 | orchestrator | 2026-01-10 14:42:32.136748 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2026-01-10 14:42:32.136753 | orchestrator | Saturday 10 January 2026 14:34:42 +0000 (0:00:00.624) 0:03:43.279 ****** 2026-01-10 14:42:32.136759 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:42:32.136765 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:42:32.136771 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:42:32.136777 | orchestrator | 2026-01-10 14:42:32.136783 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2026-01-10 14:42:32.136789 | orchestrator | Saturday 10 January 2026 14:34:43 +0000 (0:00:01.331) 0:03:44.610 ****** 2026-01-10 14:42:32.136795 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-01-10 14:42:32.136801 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-01-10 
14:42:32.136807 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-01-10 14:42:32.136814 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:42:32.136820 | orchestrator | 2026-01-10 14:42:32.136826 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2026-01-10 14:42:32.136832 | orchestrator | Saturday 10 January 2026 14:34:44 +0000 (0:00:00.695) 0:03:45.306 ****** 2026-01-10 14:42:32.136838 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:42:32.136845 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:42:32.136851 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:42:32.136857 | orchestrator | 2026-01-10 14:42:32.136862 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2026-01-10 14:42:32.136868 | orchestrator | 2026-01-10 14:42:32.136874 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-01-10 14:42:32.136879 | orchestrator | Saturday 10 January 2026 14:34:45 +0000 (0:00:00.830) 0:03:46.136 ****** 2026-01-10 14:42:32.136887 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:42:32.136898 | orchestrator | 2026-01-10 14:42:32.136904 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-01-10 14:42:32.136910 | orchestrator | Saturday 10 January 2026 14:34:45 +0000 (0:00:00.517) 0:03:46.653 ****** 2026-01-10 14:42:32.136916 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:42:32.136922 | orchestrator | 2026-01-10 14:42:32.136933 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-01-10 14:42:32.136939 | orchestrator | Saturday 10 January 2026 14:34:46 +0000 (0:00:00.528) 0:03:47.182 ****** 
2026-01-10 14:42:32.136944 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:42:32.136950 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:42:32.136956 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:42:32.136962 | orchestrator |
2026-01-10 14:42:32.136967 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-01-10 14:42:32.136974 | orchestrator | Saturday 10 January 2026 14:34:47 +0000 (0:00:01.209) 0:03:48.391 ******
2026-01-10 14:42:32.136980 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:42:32.136985 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:42:32.136991 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:42:32.136996 | orchestrator |
2026-01-10 14:42:32.137002 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-01-10 14:42:32.137007 | orchestrator | Saturday 10 January 2026 14:34:47 +0000 (0:00:00.340) 0:03:48.732 ******
2026-01-10 14:42:32.137013 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:42:32.137019 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:42:32.137025 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:42:32.137031 | orchestrator |
2026-01-10 14:42:32.137036 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-01-10 14:42:32.137043 | orchestrator | Saturday 10 January 2026 14:34:47 +0000 (0:00:00.317) 0:03:49.050 ******
2026-01-10 14:42:32.137048 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:42:32.137055 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:42:32.137061 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:42:32.137066 | orchestrator |
2026-01-10 14:42:32.137072 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-01-10 14:42:32.137077 | orchestrator | Saturday 10 January 2026 14:34:48 +0000 (0:00:00.288) 0:03:49.338 ******
2026-01-10 14:42:32.137083 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:42:32.137089 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:42:32.137094 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:42:32.137101 | orchestrator |
2026-01-10 14:42:32.137107 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-01-10 14:42:32.137113 | orchestrator | Saturday 10 January 2026 14:34:49 +0000 (0:00:01.117) 0:03:50.456 ******
2026-01-10 14:42:32.137119 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:42:32.137124 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:42:32.137130 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:42:32.137136 | orchestrator |
2026-01-10 14:42:32.137142 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-01-10 14:42:32.137148 | orchestrator | Saturday 10 January 2026 14:34:49 +0000 (0:00:00.322) 0:03:50.778 ******
2026-01-10 14:42:32.137183 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:42:32.137191 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:42:32.137197 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:42:32.137203 | orchestrator |
2026-01-10 14:42:32.137209 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-01-10 14:42:32.137217 | orchestrator | Saturday 10 January 2026 14:34:50 +0000 (0:00:00.321) 0:03:51.099 ******
2026-01-10 14:42:32.137224 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:42:32.137230 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:42:32.137237 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:42:32.137254 | orchestrator |
2026-01-10 14:42:32.137260 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-01-10 14:42:32.137265 | orchestrator | Saturday 10 January 2026 14:34:50 +0000 (0:00:00.821) 0:03:51.920 ******
2026-01-10 14:42:32.137271 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:42:32.137277 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:42:32.137283 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:42:32.137289 | orchestrator |
2026-01-10 14:42:32.137294 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-01-10 14:42:32.137300 | orchestrator | Saturday 10 January 2026 14:34:52 +0000 (0:00:01.407) 0:03:53.327 ******
2026-01-10 14:42:32.137306 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:42:32.137312 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:42:32.137318 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:42:32.137323 | orchestrator |
2026-01-10 14:42:32.137329 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-01-10 14:42:32.137335 | orchestrator | Saturday 10 January 2026 14:34:52 +0000 (0:00:00.371) 0:03:53.698 ******
2026-01-10 14:42:32.137340 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:42:32.137346 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:42:32.137352 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:42:32.137357 | orchestrator |
2026-01-10 14:42:32.137363 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-01-10 14:42:32.137369 | orchestrator | Saturday 10 January 2026 14:34:53 +0000 (0:00:00.417) 0:03:54.116 ******
2026-01-10 14:42:32.137375 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:42:32.137380 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:42:32.137386 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:42:32.137393 | orchestrator |
2026-01-10 14:42:32.137399 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-01-10 14:42:32.137405 | orchestrator | Saturday 10 January 2026 14:34:53 +0000 (0:00:00.315) 0:03:54.432 ******
2026-01-10 14:42:32.137410 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:42:32.137416 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:42:32.137423 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:42:32.137429 | orchestrator |
2026-01-10 14:42:32.137435 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-01-10 14:42:32.137486 | orchestrator | Saturday 10 January 2026 14:34:53 +0000 (0:00:00.613) 0:03:55.046 ******
2026-01-10 14:42:32.137493 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:42:32.137500 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:42:32.137506 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:42:32.137512 | orchestrator |
2026-01-10 14:42:32.137518 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-01-10 14:42:32.137524 | orchestrator | Saturday 10 January 2026 14:34:54 +0000 (0:00:00.409) 0:03:55.455 ******
2026-01-10 14:42:32.137531 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:42:32.137537 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:42:32.137543 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:42:32.137549 | orchestrator |
2026-01-10 14:42:32.137560 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-01-10 14:42:32.137567 | orchestrator | Saturday 10 January 2026 14:34:54 +0000 (0:00:00.374) 0:03:55.829 ******
2026-01-10 14:42:32.137573 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:42:32.137579 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:42:32.137586 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:42:32.137592 | orchestrator |
2026-01-10 14:42:32.137598 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-01-10 14:42:32.137604 | orchestrator | Saturday 10 January 2026 14:34:55 +0000 (0:00:00.333) 0:03:56.162 ******
2026-01-10 14:42:32.137611 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:42:32.137617 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:42:32.137623 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:42:32.137629 | orchestrator |
2026-01-10 14:42:32.137635 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-01-10 14:42:32.137647 | orchestrator | Saturday 10 January 2026 14:34:55 +0000 (0:00:00.327) 0:03:56.490 ******
2026-01-10 14:42:32.137654 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:42:32.137660 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:42:32.137666 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:42:32.137673 | orchestrator |
2026-01-10 14:42:32.137679 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-01-10 14:42:32.137685 | orchestrator | Saturday 10 January 2026 14:34:56 +0000 (0:00:00.666) 0:03:57.156 ******
2026-01-10 14:42:32.137692 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:42:32.137698 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:42:32.137704 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:42:32.137710 | orchestrator |
2026-01-10 14:42:32.137716 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] **********************************
2026-01-10 14:42:32.137721 | orchestrator | Saturday 10 January 2026 14:34:56 +0000 (0:00:00.754) 0:03:57.911 ******
2026-01-10 14:42:32.137727 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:42:32.137733 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:42:32.137739 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:42:32.137746 | orchestrator |
2026-01-10 14:42:32.137752 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] **********************************
2026-01-10 14:42:32.137758 | orchestrator | Saturday 10 January 2026 14:34:57 +0000 (0:00:00.364) 0:03:58.275 ******
2026-01-10 14:42:32.137766 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:42:32.137773 | orchestrator |
2026-01-10 14:42:32.137779 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] **************
2026-01-10 14:42:32.137785 | orchestrator | Saturday 10 January 2026 14:34:58 +0000 (0:00:00.905) 0:03:59.181 ******
2026-01-10 14:42:32.137791 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:42:32.137797 | orchestrator |
2026-01-10 14:42:32.137833 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] *****************************
2026-01-10 14:42:32.137846 | orchestrator | Saturday 10 January 2026 14:34:58 +0000 (0:00:00.208) 0:03:59.390 ******
2026-01-10 14:42:32.137855 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-01-10 14:42:32.137861 | orchestrator |
2026-01-10 14:42:32.137867 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] ****************************
2026-01-10 14:42:32.137873 | orchestrator | Saturday 10 January 2026 14:34:59 +0000 (0:00:01.316) 0:04:00.706 ******
2026-01-10 14:42:32.137879 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:42:32.137885 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:42:32.137891 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:42:32.137896 | orchestrator |
2026-01-10 14:42:32.137902 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] *******************
2026-01-10 14:42:32.137907 | orchestrator | Saturday 10 January 2026 14:34:59 +0000 (0:00:00.366) 0:04:01.073 ******
2026-01-10 14:42:32.137913 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:42:32.137919 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:42:32.137925 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:42:32.137931 | orchestrator |
2026-01-10 14:42:32.137937 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] *******************************
2026-01-10 14:42:32.137942 | orchestrator | Saturday 10 January 2026 14:35:00 +0000 (0:00:00.892) 0:04:01.965 ******
2026-01-10 14:42:32.137948 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:42:32.137954 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:42:32.137960 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:42:32.137966 | orchestrator |
2026-01-10 14:42:32.137971 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] ***********
2026-01-10 14:42:32.137977 | orchestrator | Saturday 10 January 2026 14:35:02 +0000 (0:00:01.391) 0:04:03.357 ******
2026-01-10 14:42:32.137983 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:42:32.137989 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:42:32.137994 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:42:32.138000 | orchestrator |
2026-01-10 14:42:32.138006 | orchestrator | TASK [ceph-mon : Create monitor directory] *************************************
2026-01-10 14:42:32.138058 | orchestrator | Saturday 10 January 2026 14:35:03 +0000 (0:00:00.772) 0:04:04.129 ******
2026-01-10 14:42:32.138066 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:42:32.138072 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:42:32.138078 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:42:32.138084 | orchestrator |
2026-01-10 14:42:32.138090 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] ***************
2026-01-10 14:42:32.138096 | orchestrator | Saturday 10 January 2026 14:35:04 +0000 (0:00:00.947) 0:04:05.077 ******
2026-01-10 14:42:32.138102 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:42:32.138108 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:42:32.138114 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:42:32.138120 | orchestrator |
2026-01-10 14:42:32.138126 | orchestrator | TASK [ceph-mon : Create admin keyring] *****************************************
2026-01-10 14:42:32.138132 | orchestrator | Saturday 10 January 2026 14:35:04 +0000 (0:00:00.825) 0:04:05.902 ******
2026-01-10 14:42:32.138138 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:42:32.138144 | orchestrator |
2026-01-10 14:42:32.138150 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ******************************************
2026-01-10 14:42:32.138157 | orchestrator | Saturday 10 January 2026 14:35:06 +0000 (0:00:01.870) 0:04:07.773 ******
2026-01-10 14:42:32.138163 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:42:32.138169 | orchestrator |
2026-01-10 14:42:32.138175 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ******************************
2026-01-10 14:42:32.138186 | orchestrator | Saturday 10 January 2026 14:35:07 +0000 (0:00:00.765) 0:04:08.539 ******
2026-01-10 14:42:32.138193 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-10 14:42:32.138199 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-10 14:42:32.138204 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-01-10 14:42:32.138211 | orchestrator | changed: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-01-10 14:42:32.138217 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-01-10 14:42:32.138224 | orchestrator | ok: [testbed-node-1] => (item=None)
2026-01-10 14:42:32.138230 | orchestrator | changed: [testbed-node-2] => (item=None)
2026-01-10 14:42:32.138236 | orchestrator | changed: [testbed-node-2 -> {{ item }}]
2026-01-10 14:42:32.138242 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-01-10 14:42:32.138248 | orchestrator | changed: [testbed-node-0 -> {{ item }}]
2026-01-10 14:42:32.138255 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-01-10 14:42:32.138261 | orchestrator | ok: [testbed-node-1 -> {{ item }}]
2026-01-10 14:42:32.138267 | orchestrator |
2026-01-10 14:42:32.138273 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************
2026-01-10 14:42:32.138279 | orchestrator | Saturday 10 January 2026 14:35:10 +0000 (0:00:03.513) 0:04:12.052 ******
2026-01-10 14:42:32.138285 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:42:32.138290 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:42:32.138296 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:42:32.138302 | orchestrator |
2026-01-10 14:42:32.138308 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] **************************
2026-01-10 14:42:32.138314 | orchestrator | Saturday 10 January 2026 14:35:12 +0000 (0:00:01.215) 0:04:13.267 ******
2026-01-10 14:42:32.138320 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:42:32.138325 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:42:32.138331 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:42:32.138337 | orchestrator |
2026-01-10 14:42:32.138343 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************
2026-01-10 14:42:32.138349 | orchestrator | Saturday 10 January 2026 14:35:12 +0000 (0:00:00.394) 0:04:13.662 ******
2026-01-10 14:42:32.138355 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:42:32.138362 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:42:32.138368 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:42:32.138381 | orchestrator |
2026-01-10 14:42:32.138388 | orchestrator | TASK [ceph-mon : Generate initial monmap] **************************************
2026-01-10 14:42:32.138394 | orchestrator | Saturday 10 January 2026 14:35:13 +0000 (0:00:00.701) 0:04:14.364 ******
2026-01-10 14:42:32.138400 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:42:32.138433 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:42:32.138462 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:42:32.138468 | orchestrator |
2026-01-10 14:42:32.138474 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] *******************************
2026-01-10 14:42:32.138480 | orchestrator | Saturday 10 January 2026 14:35:14 +0000 (0:00:01.679) 0:04:16.043 ******
2026-01-10 14:42:32.138486 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:42:32.138492 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:42:32.138498 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:42:32.138504 | orchestrator |
2026-01-10 14:42:32.138509 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] ****************************
2026-01-10 14:42:32.138515 | orchestrator | Saturday 10 January 2026 14:35:16 +0000 (0:00:01.490) 0:04:17.534 ******
2026-01-10 14:42:32.138521 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:42:32.138527 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:42:32.138532 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:42:32.138539 | orchestrator |
2026-01-10 14:42:32.138545 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************
2026-01-10 14:42:32.138550 | orchestrator | Saturday 10 January 2026 14:35:16 +0000 (0:00:00.469) 0:04:18.003 ******
2026-01-10 14:42:32.138556 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:42:32.138563 | orchestrator |
2026-01-10 14:42:32.138568 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] *************
2026-01-10 14:42:32.138574 | orchestrator | Saturday 10 January 2026 14:35:18 +0000 (0:00:01.197) 0:04:19.201 ******
2026-01-10 14:42:32.138580 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:42:32.138586 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:42:32.138591 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:42:32.138597 | orchestrator |
2026-01-10 14:42:32.138603 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] ***********************
2026-01-10 14:42:32.138609 | orchestrator | Saturday 10 January 2026 14:35:18 +0000 (0:00:00.719) 0:04:19.920 ******
2026-01-10 14:42:32.138615 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:42:32.138621 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:42:32.138627 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:42:32.138633 | orchestrator |
2026-01-10 14:42:32.138639 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************
2026-01-10 14:42:32.138646 | orchestrator | Saturday 10 January 2026 14:35:19 +0000 (0:00:00.408) 0:04:20.329 ******
2026-01-10 14:42:32.138652 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:42:32.138659 | orchestrator |
2026-01-10 14:42:32.138665 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] *****************
2026-01-10 14:42:32.138671 | orchestrator | Saturday 10 January 2026 14:35:20 +0000 (0:00:00.748) 0:04:21.078 ******
2026-01-10 14:42:32.138677 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:42:32.138682 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:42:32.138688 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:42:32.138695 | orchestrator |
2026-01-10 14:42:32.138701 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************
2026-01-10 14:42:32.138707 | orchestrator | Saturday 10 January 2026 14:35:21 +0000 (0:00:01.709) 0:04:22.787 ******
2026-01-10 14:42:32.138713 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:42:32.138720 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:42:32.138731 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:42:32.138737 | orchestrator |
2026-01-10 14:42:32.138743 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] ***************************************
2026-01-10 14:42:32.138755 | orchestrator | Saturday 10 January 2026 14:35:23 +0000 (0:00:01.416) 0:04:24.203 ******
2026-01-10 14:42:32.138762 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:42:32.138768 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:42:32.138774 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:42:32.138780 | orchestrator |
2026-01-10 14:42:32.138786 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************
2026-01-10 14:42:32.138792 | orchestrator | Saturday 10 January 2026 14:35:25 +0000 (0:00:01.924) 0:04:26.128 ******
2026-01-10 14:42:32.138798 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:42:32.138805 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:42:32.138811 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:42:32.138817 | orchestrator |
2026-01-10 14:42:32.138823 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] **********************************
2026-01-10 14:42:32.138830 | orchestrator | Saturday 10 January 2026 14:35:27 +0000 (0:00:02.350) 0:04:28.478 ******
2026-01-10 14:42:32.138836 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:42:32.138842 | orchestrator |
2026-01-10 14:42:32.138848 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] *************
2026-01-10 14:42:32.138854 | orchestrator | Saturday 10 January 2026 14:35:28 +0000 (0:00:00.623) 0:04:29.102 ******
2026-01-10 14:42:32.138860 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left).
2026-01-10 14:42:32.138866 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:42:32.138872 | orchestrator |
2026-01-10 14:42:32.138878 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] **************************************
2026-01-10 14:42:32.138883 | orchestrator | Saturday 10 January 2026 14:35:50 +0000 (0:00:22.013) 0:04:51.115 ******
2026-01-10 14:42:32.138889 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:42:32.138894 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:42:32.138900 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:42:32.138906 | orchestrator |
2026-01-10 14:42:32.138911 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] ***********************************
2026-01-10 14:42:32.138917 | orchestrator | Saturday 10 January 2026 14:35:59 +0000 (0:00:09.259) 0:05:00.375 ******
2026-01-10 14:42:32.138922 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:42:32.138928 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:42:32.138934 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:42:32.138940 | orchestrator |
2026-01-10 14:42:32.138947 | orchestrator | TASK [ceph-mon : Set cluster configs] ******************************************
2026-01-10 14:42:32.138979 | orchestrator | Saturday 10 January 2026 14:35:59 +0000 (0:00:00.492) 0:05:00.867 ******
2026-01-10 14:42:32.138988 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__b9bd061c0c3f79d90fd16ada373e7faf93133c2e'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}])
2026-01-10 14:42:32.138997 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__b9bd061c0c3f79d90fd16ada373e7faf93133c2e'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}])
2026-01-10 14:42:32.139005 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__b9bd061c0c3f79d90fd16ada373e7faf93133c2e'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}])
2026-01-10 14:42:32.139013 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__b9bd061c0c3f79d90fd16ada373e7faf93133c2e'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}])
2026-01-10 14:42:32.139026 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__b9bd061c0c3f79d90fd16ada373e7faf93133c2e'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}])
2026-01-10 14:42:32.139037 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__b9bd061c0c3f79d90fd16ada373e7faf93133c2e'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__b9bd061c0c3f79d90fd16ada373e7faf93133c2e'}])
2026-01-10 14:42:32.139046 | orchestrator |
2026-01-10 14:42:32.139052 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-01-10 14:42:32.139058 | orchestrator | Saturday 10 January 2026 14:36:14 +0000 (0:00:14.404) 0:05:15.272 ******
2026-01-10 14:42:32.139064 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:42:32.139070 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:42:32.139076 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:42:32.139082 | orchestrator |
2026-01-10 14:42:32.139088 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-01-10 14:42:32.139093 | orchestrator | Saturday 10 January 2026 14:36:14 +0000 (0:00:00.333) 0:05:15.605 ******
2026-01-10 14:42:32.139100 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:42:32.139106 | orchestrator |
2026-01-10 14:42:32.139112 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2026-01-10 14:42:32.139119 | orchestrator | Saturday 10 January 2026 14:36:15 +0000 (0:00:00.853) 0:05:16.459 ******
2026-01-10 14:42:32.139125 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:42:32.139131 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:42:32.139137 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:42:32.139144 | orchestrator |
2026-01-10 14:42:32.139150 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2026-01-10 14:42:32.139156 | orchestrator | Saturday 10 January 2026 14:36:15 +0000 (0:00:00.313) 0:05:16.772 ******
2026-01-10 14:42:32.139162 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:42:32.139168 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:42:32.139174 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:42:32.139180 | orchestrator |
2026-01-10 14:42:32.139186 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2026-01-10 14:42:32.139192 | orchestrator | Saturday 10 January 2026 14:36:16 +0000 (0:00:00.360) 0:05:17.133 ******
2026-01-10 14:42:32.139199 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-01-10 14:42:32.139205 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-01-10 14:42:32.139211 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-01-10 14:42:32.139216 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:42:32.139223 | orchestrator |
2026-01-10 14:42:32.139229 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2026-01-10 14:42:32.139236 | orchestrator | Saturday 10 January 2026 14:36:17 +0000 (0:00:01.169) 0:05:18.303 ******
2026-01-10 14:42:32.139242 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:42:32.139248 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:42:32.139278 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:42:32.139286 | orchestrator |
2026-01-10 14:42:32.139291 | orchestrator | PLAY [Apply role ceph-mgr] *****************************************************
2026-01-10 14:42:32.139303 | orchestrator |
2026-01-10 14:42:32.139309 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-01-10 14:42:32.139316 | orchestrator | Saturday 10 January 2026 14:36:17 +0000 (0:00:00.578) 0:05:18.881 ******
2026-01-10 14:42:32.139322 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:42:32.139329 | orchestrator |
2026-01-10 14:42:32.139336 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-01-10 14:42:32.139342 | orchestrator | Saturday 10 January 2026 14:36:18 +0000 (0:00:00.499) 0:05:19.381 ******
2026-01-10 14:42:32.139348 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:42:32.139354 | orchestrator |
2026-01-10 14:42:32.139361 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-01-10 14:42:32.139367 | orchestrator | Saturday 10 January 2026 14:36:19 +0000 (0:00:00.771) 0:05:20.153 ******
2026-01-10 14:42:32.139373 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:42:32.139379 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:42:32.139385 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:42:32.139391 | orchestrator |
2026-01-10 14:42:32.139398 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-01-10 14:42:32.139404 | orchestrator | Saturday 10 January 2026 14:36:19 +0000 (0:00:00.818) 0:05:20.971 ******
2026-01-10 14:42:32.139410 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:42:32.139416 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:42:32.139422 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:42:32.139428 | orchestrator |
2026-01-10 14:42:32.139435 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-01-10 14:42:32.139485 | orchestrator | Saturday 10 January 2026 14:36:20 +0000 (0:00:00.367) 0:05:21.338 ******
2026-01-10 14:42:32.139490 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:42:32.139494 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:42:32.139498 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:42:32.139502 | orchestrator |
2026-01-10 14:42:32.139505 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-01-10 14:42:32.139509 | orchestrator | Saturday 10 January 2026 14:36:20 +0000 (0:00:00.616) 0:05:21.955 ******
2026-01-10 14:42:32.139513 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:42:32.139517 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:42:32.139521 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:42:32.139524 | orchestrator |
2026-01-10 14:42:32.139528 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-01-10 14:42:32.139532 | orchestrator | Saturday 10 January 2026 14:36:21 +0000 (0:00:00.326) 0:05:22.281 ******
2026-01-10 14:42:32.139535 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:42:32.139539 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:42:32.139543 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:42:32.139547 | orchestrator |
2026-01-10 14:42:32.139550 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-01-10 14:42:32.139554 | orchestrator | Saturday 10 January 2026 14:36:21 +0000 (0:00:00.719) 0:05:23.001 ******
2026-01-10 14:42:32.139558 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:42:32.139566 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:42:32.139570 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:42:32.139573 | orchestrator |
2026-01-10 14:42:32.139577 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-01-10 14:42:32.139581 | orchestrator | Saturday 10 January 2026 14:36:22 +0000 (0:00:00.389) 0:05:23.390 ******
2026-01-10 14:42:32.139584 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:42:32.139590 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:42:32.139596 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:42:32.139602 | orchestrator |
2026-01-10 14:42:32.139608 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-01-10 14:42:32.139619 | orchestrator | Saturday 10 January 2026 14:36:22 +0000 (0:00:00.613) 0:05:24.004 ******
2026-01-10 14:42:32.139624 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:42:32.139630 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:42:32.139636 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:42:32.139641 | orchestrator |
2026-01-10 14:42:32.139648 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-01-10 14:42:32.139654 | orchestrator | Saturday 10 January 2026 14:36:23 +0000 (0:00:00.841) 0:05:24.845 ******
2026-01-10 14:42:32.139660 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:42:32.139666 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:42:32.139672 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:42:32.139679 | orchestrator |
2026-01-10 14:42:32.139684 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-01-10 14:42:32.139691 | orchestrator | Saturday 10 January 2026 14:36:24 +0000 (0:00:00.790) 0:05:25.636 ******
2026-01-10 14:42:32.139697 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:42:32.139703 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:42:32.139709 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:42:32.139714 | orchestrator |
2026-01-10 14:42:32.139720 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-01-10 14:42:32.139726 | orchestrator | Saturday 10 January 2026 14:36:24 +0000 (0:00:00.296) 0:05:25.932 ******
2026-01-10 14:42:32.139732 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:42:32.139738 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:42:32.139743 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:42:32.139749 | orchestrator |
2026-01-10 14:42:32.139754 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-01-10 14:42:32.139760 | orchestrator | Saturday 10 January 2026 14:36:25 +0000 (0:00:00.607) 0:05:26.540 ******
2026-01-10 14:42:32.139766 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:42:32.139771 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:42:32.139777 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:42:32.139783 | orchestrator |
2026-01-10 14:42:32.139789 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-01-10 14:42:32.139824 | orchestrator | Saturday 10 January 2026 14:36:25 +0000 (0:00:00.316) 0:05:26.857 ****** 2026-01-10 14:42:32.139831 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:42:32.139837 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:42:32.139843 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:42:32.139849 | orchestrator | 2026-01-10 14:42:32.139855 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-01-10 14:42:32.139861 | orchestrator | Saturday 10 January 2026 14:36:26 +0000 (0:00:00.343) 0:05:27.200 ****** 2026-01-10 14:42:32.139867 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:42:32.139872 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:42:32.139878 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:42:32.139884 | orchestrator | 2026-01-10 14:42:32.139890 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-01-10 14:42:32.139896 | orchestrator | Saturday 10 January 2026 14:36:26 +0000 (0:00:00.327) 0:05:27.527 ****** 2026-01-10 14:42:32.139902 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:42:32.139909 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:42:32.139914 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:42:32.139921 | orchestrator | 2026-01-10 14:42:32.139927 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-01-10 14:42:32.139932 | orchestrator | Saturday 10 January 2026 14:36:26 +0000 (0:00:00.308) 0:05:27.836 ****** 2026-01-10 14:42:32.139938 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:42:32.139944 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:42:32.139950 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:42:32.139956 | orchestrator | 
2026-01-10 14:42:32.139962 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-01-10 14:42:32.139968 | orchestrator | Saturday 10 January 2026 14:36:27 +0000 (0:00:00.614) 0:05:28.451 ****** 2026-01-10 14:42:32.139980 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:42:32.139985 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:42:32.139991 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:42:32.139997 | orchestrator | 2026-01-10 14:42:32.140003 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-01-10 14:42:32.140009 | orchestrator | Saturday 10 January 2026 14:36:27 +0000 (0:00:00.320) 0:05:28.771 ****** 2026-01-10 14:42:32.140015 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:42:32.140021 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:42:32.140027 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:42:32.140032 | orchestrator | 2026-01-10 14:42:32.140038 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-01-10 14:42:32.140044 | orchestrator | Saturday 10 January 2026 14:36:28 +0000 (0:00:00.382) 0:05:29.154 ****** 2026-01-10 14:42:32.140050 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:42:32.140055 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:42:32.140061 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:42:32.140067 | orchestrator | 2026-01-10 14:42:32.140073 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-01-10 14:42:32.140078 | orchestrator | Saturday 10 January 2026 14:36:28 +0000 (0:00:00.802) 0:05:29.956 ****** 2026-01-10 14:42:32.140084 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-01-10 14:42:32.140091 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-10 14:42:32.140098 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => 
(item=testbed-node-2) 2026-01-10 14:42:32.140103 | orchestrator | 2026-01-10 14:42:32.140109 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-01-10 14:42:32.140120 | orchestrator | Saturday 10 January 2026 14:36:29 +0000 (0:00:00.658) 0:05:30.615 ****** 2026-01-10 14:42:32.140126 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:42:32.140133 | orchestrator | 2026-01-10 14:42:32.140138 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2026-01-10 14:42:32.140144 | orchestrator | Saturday 10 January 2026 14:36:30 +0000 (0:00:00.502) 0:05:31.118 ****** 2026-01-10 14:42:32.140150 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:42:32.140156 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:42:32.140161 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:42:32.140167 | orchestrator | 2026-01-10 14:42:32.140173 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2026-01-10 14:42:32.140178 | orchestrator | Saturday 10 January 2026 14:36:30 +0000 (0:00:00.661) 0:05:31.779 ****** 2026-01-10 14:42:32.140184 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:42:32.140190 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:42:32.140196 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:42:32.140202 | orchestrator | 2026-01-10 14:42:32.140208 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2026-01-10 14:42:32.140213 | orchestrator | Saturday 10 January 2026 14:36:31 +0000 (0:00:00.623) 0:05:32.403 ****** 2026-01-10 14:42:32.140217 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-01-10 14:42:32.140223 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-01-10 14:42:32.140229 | orchestrator | changed: [testbed-node-0] => (item=None) 
2026-01-10 14:42:32.140235 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2026-01-10 14:42:32.140241 | orchestrator | 2026-01-10 14:42:32.140247 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2026-01-10 14:42:32.140252 | orchestrator | Saturday 10 January 2026 14:36:40 +0000 (0:00:09.253) 0:05:41.657 ****** 2026-01-10 14:42:32.140258 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:42:32.140265 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:42:32.140271 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:42:32.140276 | orchestrator | 2026-01-10 14:42:32.140282 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2026-01-10 14:42:32.140297 | orchestrator | Saturday 10 January 2026 14:36:40 +0000 (0:00:00.378) 0:05:42.035 ****** 2026-01-10 14:42:32.140304 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-01-10 14:42:32.140310 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-01-10 14:42:32.140316 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-01-10 14:42:32.140323 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-01-10 14:42:32.140328 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-10 14:42:32.140361 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-10 14:42:32.140369 | orchestrator | 2026-01-10 14:42:32.140375 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2026-01-10 14:42:32.140382 | orchestrator | Saturday 10 January 2026 14:36:43 +0000 (0:00:02.272) 0:05:44.308 ****** 2026-01-10 14:42:32.140387 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-01-10 14:42:32.140395 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-01-10 14:42:32.140399 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-01-10 
14:42:32.140402 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-01-10 14:42:32.140406 | orchestrator | changed: [testbed-node-2] => (item=None) 2026-01-10 14:42:32.140410 | orchestrator | changed: [testbed-node-1] => (item=None) 2026-01-10 14:42:32.140414 | orchestrator | 2026-01-10 14:42:32.140417 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2026-01-10 14:42:32.140421 | orchestrator | Saturday 10 January 2026 14:36:44 +0000 (0:00:01.393) 0:05:45.701 ****** 2026-01-10 14:42:32.140428 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:42:32.140434 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:42:32.140458 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:42:32.140464 | orchestrator | 2026-01-10 14:42:32.140470 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2026-01-10 14:42:32.140475 | orchestrator | Saturday 10 January 2026 14:36:45 +0000 (0:00:01.198) 0:05:46.900 ****** 2026-01-10 14:42:32.140480 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:42:32.140486 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:42:32.140492 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:42:32.140498 | orchestrator | 2026-01-10 14:42:32.140504 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-01-10 14:42:32.140510 | orchestrator | Saturday 10 January 2026 14:36:46 +0000 (0:00:00.307) 0:05:47.207 ****** 2026-01-10 14:42:32.140516 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:42:32.140523 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:42:32.140529 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:42:32.140535 | orchestrator | 2026-01-10 14:42:32.140541 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2026-01-10 14:42:32.140547 | orchestrator | Saturday 10 January 2026 14:36:46 +0000 (0:00:00.355) 
0:05:47.563 ****** 2026-01-10 14:42:32.140553 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:42:32.140559 | orchestrator | 2026-01-10 14:42:32.140563 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2026-01-10 14:42:32.140567 | orchestrator | Saturday 10 January 2026 14:36:47 +0000 (0:00:00.825) 0:05:48.389 ****** 2026-01-10 14:42:32.140570 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:42:32.140574 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:42:32.140578 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:42:32.140582 | orchestrator | 2026-01-10 14:42:32.140586 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2026-01-10 14:42:32.140589 | orchestrator | Saturday 10 January 2026 14:36:47 +0000 (0:00:00.423) 0:05:48.813 ****** 2026-01-10 14:42:32.140593 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:42:32.140597 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:42:32.140600 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:42:32.140604 | orchestrator | 2026-01-10 14:42:32.140608 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************ 2026-01-10 14:42:32.140627 | orchestrator | Saturday 10 January 2026 14:36:48 +0000 (0:00:00.325) 0:05:49.138 ****** 2026-01-10 14:42:32.140631 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:42:32.140635 | orchestrator | 2026-01-10 14:42:32.140639 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2026-01-10 14:42:32.140642 | orchestrator | Saturday 10 January 2026 14:36:48 +0000 (0:00:00.796) 0:05:49.935 ****** 2026-01-10 14:42:32.140646 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:42:32.140650 | orchestrator | 
changed: [testbed-node-2] 2026-01-10 14:42:32.140654 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:42:32.140657 | orchestrator | 2026-01-10 14:42:32.140661 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2026-01-10 14:42:32.140665 | orchestrator | Saturday 10 January 2026 14:36:50 +0000 (0:00:01.348) 0:05:51.283 ****** 2026-01-10 14:42:32.140669 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:42:32.140672 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:42:32.140676 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:42:32.140680 | orchestrator | 2026-01-10 14:42:32.140684 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2026-01-10 14:42:32.140687 | orchestrator | Saturday 10 January 2026 14:36:51 +0000 (0:00:01.293) 0:05:52.577 ****** 2026-01-10 14:42:32.140691 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:42:32.140695 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:42:32.140699 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:42:32.140703 | orchestrator | 2026-01-10 14:42:32.140706 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 2026-01-10 14:42:32.140710 | orchestrator | Saturday 10 January 2026 14:36:53 +0000 (0:00:01.837) 0:05:54.415 ****** 2026-01-10 14:42:32.140714 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:42:32.140718 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:42:32.140721 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:42:32.140725 | orchestrator | 2026-01-10 14:42:32.140729 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2026-01-10 14:42:32.140732 | orchestrator | Saturday 10 January 2026 14:36:55 +0000 (0:00:02.325) 0:05:56.741 ****** 2026-01-10 14:42:32.140736 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:42:32.140740 | orchestrator | skipping: 
[testbed-node-1] 2026-01-10 14:42:32.140744 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2026-01-10 14:42:32.140747 | orchestrator | 2026-01-10 14:42:32.140751 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************ 2026-01-10 14:42:32.140755 | orchestrator | Saturday 10 January 2026 14:36:56 +0000 (0:00:00.428) 0:05:57.169 ****** 2026-01-10 14:42:32.140777 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left). 2026-01-10 14:42:32.140782 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left). 2026-01-10 14:42:32.140786 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left). 2026-01-10 14:42:32.140790 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left). 2026-01-10 14:42:32.140794 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (26 retries left). 2026-01-10 14:42:32.140797 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (25 retries left). 
2026-01-10 14:42:32.140801 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-01-10 14:42:32.140805 | orchestrator | 2026-01-10 14:42:32.140809 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] **************************** 2026-01-10 14:42:32.140813 | orchestrator | Saturday 10 January 2026 14:37:32 +0000 (0:00:36.536) 0:06:33.706 ****** 2026-01-10 14:42:32.140817 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-01-10 14:42:32.140823 | orchestrator | 2026-01-10 14:42:32.140827 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2026-01-10 14:42:32.140831 | orchestrator | Saturday 10 January 2026 14:37:34 +0000 (0:00:01.424) 0:06:35.130 ****** 2026-01-10 14:42:32.140835 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:42:32.140839 | orchestrator | 2026-01-10 14:42:32.140842 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] ************************** 2026-01-10 14:42:32.140846 | orchestrator | Saturday 10 January 2026 14:37:34 +0000 (0:00:00.329) 0:06:35.459 ****** 2026-01-10 14:42:32.140850 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:42:32.140854 | orchestrator | 2026-01-10 14:42:32.140858 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] ***************************** 2026-01-10 14:42:32.140861 | orchestrator | Saturday 10 January 2026 14:37:34 +0000 (0:00:00.155) 0:06:35.614 ****** 2026-01-10 14:42:32.140865 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat) 2026-01-10 14:42:32.140869 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs) 2026-01-10 14:42:32.140873 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful) 2026-01-10 14:42:32.140876 | orchestrator | 2026-01-10 14:42:32.140880 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] 
************************************** 2026-01-10 14:42:32.140884 | orchestrator | Saturday 10 January 2026 14:37:41 +0000 (0:00:06.703) 0:06:42.318 ****** 2026-01-10 14:42:32.140888 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2026-01-10 14:42:32.140891 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard) 2026-01-10 14:42:32.140895 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus) 2026-01-10 14:42:32.140899 | orchestrator | skipping: [testbed-node-2] => (item=status)  2026-01-10 14:42:32.140903 | orchestrator | 2026-01-10 14:42:32.140906 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-01-10 14:42:32.140913 | orchestrator | Saturday 10 January 2026 14:37:46 +0000 (0:00:05.398) 0:06:47.716 ****** 2026-01-10 14:42:32.140917 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:42:32.140921 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:42:32.140924 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:42:32.140928 | orchestrator | 2026-01-10 14:42:32.140932 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-01-10 14:42:32.140936 | orchestrator | Saturday 10 January 2026 14:37:47 +0000 (0:00:00.795) 0:06:48.512 ****** 2026-01-10 14:42:32.140939 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:42:32.140943 | orchestrator | 2026-01-10 14:42:32.140947 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2026-01-10 14:42:32.140951 | orchestrator | Saturday 10 January 2026 14:37:48 +0000 (0:00:00.824) 0:06:49.337 ****** 2026-01-10 14:42:32.140954 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:42:32.140958 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:42:32.140962 | orchestrator | ok: 
[testbed-node-2] 2026-01-10 14:42:32.140966 | orchestrator | 2026-01-10 14:42:32.140969 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2026-01-10 14:42:32.140973 | orchestrator | Saturday 10 January 2026 14:37:48 +0000 (0:00:00.341) 0:06:49.678 ****** 2026-01-10 14:42:32.140977 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:42:32.140981 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:42:32.140985 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:42:32.140988 | orchestrator | 2026-01-10 14:42:32.140992 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2026-01-10 14:42:32.140996 | orchestrator | Saturday 10 January 2026 14:37:49 +0000 (0:00:01.294) 0:06:50.973 ****** 2026-01-10 14:42:32.141000 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-01-10 14:42:32.141003 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-01-10 14:42:32.141011 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-01-10 14:42:32.141015 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:42:32.141019 | orchestrator | 2026-01-10 14:42:32.141023 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2026-01-10 14:42:32.141027 | orchestrator | Saturday 10 January 2026 14:37:50 +0000 (0:00:00.904) 0:06:51.877 ****** 2026-01-10 14:42:32.141030 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:42:32.141034 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:42:32.141038 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:42:32.141042 | orchestrator | 2026-01-10 14:42:32.141046 | orchestrator | PLAY [Apply role ceph-osd] ***************************************************** 2026-01-10 14:42:32.141049 | orchestrator | 2026-01-10 14:42:32.141053 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-01-10 
14:42:32.141082 | orchestrator | Saturday 10 January 2026 14:37:51 +0000 (0:00:00.817) 0:06:52.694 ****** 2026-01-10 14:42:32.141087 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-10 14:42:32.141092 | orchestrator | 2026-01-10 14:42:32.141095 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-01-10 14:42:32.141099 | orchestrator | Saturday 10 January 2026 14:37:52 +0000 (0:00:00.522) 0:06:53.217 ****** 2026-01-10 14:42:32.141103 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-10 14:42:32.141107 | orchestrator | 2026-01-10 14:42:32.141110 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-01-10 14:42:32.141114 | orchestrator | Saturday 10 January 2026 14:37:53 +0000 (0:00:00.921) 0:06:54.138 ****** 2026-01-10 14:42:32.141118 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:42:32.141122 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:42:32.141125 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:42:32.141129 | orchestrator | 2026-01-10 14:42:32.141133 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-01-10 14:42:32.141136 | orchestrator | Saturday 10 January 2026 14:37:53 +0000 (0:00:00.358) 0:06:54.497 ****** 2026-01-10 14:42:32.141140 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:42:32.141144 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:42:32.141148 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:42:32.141151 | orchestrator | 2026-01-10 14:42:32.141155 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-01-10 14:42:32.141159 | orchestrator | Saturday 10 January 2026 14:37:54 +0000 (0:00:00.758) 0:06:55.255 ****** 
2026-01-10 14:42:32.141162 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:42:32.141166 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:42:32.141170 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:42:32.141174 | orchestrator | 2026-01-10 14:42:32.141177 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-01-10 14:42:32.141181 | orchestrator | Saturday 10 January 2026 14:37:55 +0000 (0:00:00.832) 0:06:56.087 ****** 2026-01-10 14:42:32.141185 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:42:32.141189 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:42:32.141192 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:42:32.141196 | orchestrator | 2026-01-10 14:42:32.141200 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-01-10 14:42:32.141204 | orchestrator | Saturday 10 January 2026 14:37:56 +0000 (0:00:01.056) 0:06:57.144 ****** 2026-01-10 14:42:32.141208 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:42:32.141214 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:42:32.141220 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:42:32.141226 | orchestrator | 2026-01-10 14:42:32.141232 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-01-10 14:42:32.141238 | orchestrator | Saturday 10 January 2026 14:37:56 +0000 (0:00:00.316) 0:06:57.461 ****** 2026-01-10 14:42:32.141244 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:42:32.141250 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:42:32.141262 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:42:32.141266 | orchestrator | 2026-01-10 14:42:32.141270 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-01-10 14:42:32.141274 | orchestrator | Saturday 10 January 2026 14:37:56 +0000 (0:00:00.351) 0:06:57.812 ****** 2026-01-10 14:42:32.141280 | 
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a ceph-crash container] *************************
Saturday 10 January 2026 14:37:57 +0000 (0:00:00.343) 0:06:58.156 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a ceph-exporter container] **********************
Saturday 10 January 2026 14:37:58 +0000 (0:00:01.232) 0:06:59.389 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Include check_socket_non_container.yml] *******************
Saturday 10 January 2026 14:37:59 +0000 (0:00:00.773) 0:07:00.162 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mon_status] ******************************
Saturday 10 January 2026 14:37:59 +0000 (0:00:00.334) 0:07:00.497 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_osd_status] ******************************
Saturday 10 January 2026 14:37:59 +0000 (0:00:00.305) 0:07:00.802 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mds_status] ******************************
Saturday 10 January 2026 14:38:00 +0000 (0:00:00.606) 0:07:01.409 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
Saturday 10 January 2026 14:38:00 +0000 (0:00:00.405) 0:07:01.814 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
Saturday 10 January 2026 14:38:01 +0000 (0:00:00.401) 0:07:02.216 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
Saturday 10 January 2026 14:38:01 +0000 (0:00:00.317) 0:07:02.534 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
Saturday 10 January 2026 14:38:02 +0000 (0:00:00.593) 0:07:03.128 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_crash_status] ****************************
Saturday 10 January 2026 14:38:02 +0000 (0:00:00.341) 0:07:03.469 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_exporter_status] *************************
Saturday 10 January 2026 14:38:02 +0000 (0:00:00.354) 0:07:03.824 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-osd : Set_fact add_osd] *********************************************
Saturday 10 January 2026 14:38:03 +0000 (0:00:00.753) 0:07:04.577 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-osd : Set_fact container_exec_cmd] **********************************
Saturday 10 January 2026 14:38:03 +0000 (0:00:00.343) 0:07:04.921 ******
ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)

TASK [ceph-osd : Include_tasks system_tuning.yml] ******************************
Saturday 10 January 2026 14:38:04 +0000 (0:00:00.651) 0:07:05.572 ******
included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-osd : Create tmpfiles.d directory] **********************************
Saturday 10 January 2026 14:38:05 +0000 (0:00:00.682) 0:07:06.255 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-osd : Disable transparent hugepage] *********************************
Saturday 10 January 2026 14:38:05 +0000 (0:00:00.607) 0:07:06.862 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-osd : Get default vm.min_free_kbytes] *******************************
Saturday 10 January 2026 14:38:06 +0000 (0:00:00.313) 0:07:07.176 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-osd : Set_fact vm_min_free_kbytes] **********************************
Saturday 10 January 2026 14:38:06 +0000 (0:00:00.698) 0:07:07.874 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-osd : Apply operating system tuning] ********************************
Saturday 10 January 2026 14:38:07 +0000 (0:00:00.366) 0:07:08.240 ******
changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859})
changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859})
changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10})
changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859})
changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10})
changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10})
changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})

TASK [ceph-osd : Install dependencies] *****************************************
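As an annotation on the "Apply operating system tuning" output above: each loop item is a plain name/value sysctl setting. A minimal sketch (not part of the play; the helper and file layout are illustrative) of rendering those same settings into a sysctl.d-style fragment:

```python
# Render the sysctl settings logged by "Apply operating system tuning" into a
# sysctl.d-style fragment. The values mirror the task output above; the helper
# itself is illustrative and not taken from ceph-ansible.
os_tuning_params = [
    {'name': 'fs.aio-max-nr', 'value': '1048576'},
    {'name': 'fs.file-max', 'value': 26234859},
    {'name': 'vm.zone_reclaim_mode', 'value': 0},
    {'name': 'vm.swappiness', 'value': 10},
    {'name': 'vm.min_free_kbytes', 'value': '67584'},
]

def render_sysctl_fragment(params):
    """Return text suitable for e.g. an /etc/sysctl.d/*.conf drop-in."""
    return "".join(f"{p['name']} = {p['value']}\n" for p in params)

print(render_sysctl_fragment(os_tuning_params), end="")
```

Applying such a fragment by hand would be `sysctl --system` (or `sysctl -p <file>`) as root; the play instead applies each item individually, which is why every setting shows up as its own `changed:` line per node.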
Saturday 10 January 2026 14:38:11 +0000 (0:00:04.244) 0:07:12.486 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-osd : Include_tasks common.yml] *************************************
Saturday 10 January 2026 14:38:11 +0000 (0:00:00.306) 0:07:12.792 ******
included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-osd : Create bootstrap-osd and osd directories] *********************
Saturday 10 January 2026 14:38:12 +0000 (0:00:00.519) 0:07:13.312 ******
ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/)
ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/)
ok: [testbed-node-3] => (item=/var/lib/ceph/osd/)
ok: [testbed-node-4] => (item=/var/lib/ceph/osd/)
ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/)
ok: [testbed-node-5] => (item=/var/lib/ceph/osd/)

TASK [ceph-osd : Get keys from monitors] ***************************************
Saturday 10 January 2026 14:38:13 +0000 (0:00:01.448) 0:07:14.760 ******
ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
skipping: [testbed-node-3] => (item=None)
ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]

TASK [ceph-osd : Copy ceph key(s) if needed] ***********************************
Saturday 10 January 2026 14:38:16 +0000 (0:00:02.330) 0:07:17.090 ******
changed: [testbed-node-3] => (item=None)
skipping: [testbed-node-3] => (item=None)
changed: [testbed-node-3]
changed: [testbed-node-4] => (item=None)
skipping: [testbed-node-4] => (item=None)
changed: [testbed-node-4]
changed: [testbed-node-5] => (item=None)
skipping: [testbed-node-5] => (item=None)
changed: [testbed-node-5]

TASK [ceph-osd : Set noup flag] ************************************************
Saturday 10 January 2026 14:38:17 +0000 (0:00:01.175) 0:07:18.265 ******
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]

TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ******************************
Saturday 10 January 2026 14:38:19 +0000 (0:00:02.168) 0:07:20.434 ******
included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-osd : Use ceph-volume to create osds] *******************************
Saturday 10 January 2026 14:38:20 +0000 (0:00:00.828) 0:07:21.263 ******
changed: [testbed-node-4] => (item={'data': 'osd-block-8bd1ebb6-f1fa-58d8-b8a2-53a51729cfca', 'data_vg': 'ceph-8bd1ebb6-f1fa-58d8-b8a2-53a51729cfca'})
changed: [testbed-node-3] => (item={'data': 'osd-block-afcf3728-3a76-5607-aebb-61451d8643bd', 'data_vg': 'ceph-afcf3728-3a76-5607-aebb-61451d8643bd'})
changed: [testbed-node-5] => (item={'data': 'osd-block-377cb61f-8fa6-58d2-888b-072b5e96ec0c', 'data_vg': 'ceph-377cb61f-8fa6-58d2-888b-072b5e96ec0c'})
changed: [testbed-node-3] => (item={'data': 'osd-block-7d69473f-eeb6-5b22-bf27-181ed9eac77f', 'data_vg': 'ceph-7d69473f-eeb6-5b22-bf27-181ed9eac77f'})
changed: [testbed-node-4] => (item={'data': 'osd-block-d6926eeb-1396-512c-9972-e44f7d919ea4', 'data_vg': 'ceph-d6926eeb-1396-512c-9972-e44f7d919ea4'})
changed: [testbed-node-5] => (item={'data': 'osd-block-82a5292d-e4f5-5675-b04e-23ddf5e1abb7', 'data_vg': 'ceph-82a5292d-e4f5-5675-b04e-23ddf5e1abb7'})

TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************
Saturday 10 January 2026 14:39:01 +0000 (0:00:41.639) 0:08:02.902 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-osd : Include_tasks start_osds.yml] *********************************
Saturday 10 January 2026 14:39:02 +0000 (0:00:00.319) 0:08:03.221 ******
included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-osd : Get osd ids] **************************************************
Saturday 10 January 2026 14:39:02 +0000 (0:00:00.799) 0:08:04.020 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-osd : Collect osd ids] **********************************************
Saturday 10 January 2026 14:39:03 +0000 (0:00:00.715) 0:08:04.736 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-osd : Include_tasks systemd.yml] ************************************
Saturday 10 January 2026 14:39:06 +0000 (0:00:02.940) 0:08:07.676 ******
included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-osd : Generate systemd unit file] ***********************************
Saturday 10 January 2026 14:39:07 +0000 (0:00:00.785) 0:08:08.462 ******
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-osd : Generate systemd ceph-osd target file] ************************
Saturday 10 January 2026 14:39:08 +0000 (0:00:01.302) 0:08:09.764 ******
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-osd : Enable ceph-osd.target] ***************************************
Saturday 10 January 2026 14:39:09 +0000 (0:00:01.255) 0:08:11.019 ******
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-osd : Ensure systemd service override directory exists] *************
Saturday 10 January 2026 14:39:13 +0000 (0:00:03.219) 0:08:14.239 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-osd : Add ceph-osd systemd service overrides] ***********************
Saturday 10 January 2026 14:39:13 +0000 (0:00:00.608) 0:08:14.847 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] *********
Saturday 10 January 2026 14:39:14 +0000 (0:00:00.364) 0:08:15.212 ******
ok: [testbed-node-3] => (item=0)
ok: [testbed-node-4] => (item=1)
ok: [testbed-node-5] => (item=2)
ok: [testbed-node-3] => (item=4)
ok: [testbed-node-4] => (item=3)
ok: [testbed-node-5] => (item=5)

TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] *****************
Saturday 10 January 2026 14:39:15 +0000 (0:00:01.197) 0:08:16.410 ******
changed: [testbed-node-3] => (item=0)
changed: [testbed-node-4] => (item=1)
changed: [testbed-node-5] => (item=2)
changed: [testbed-node-3] => (item=4)
changed: [testbed-node-4] => (item=3)
changed: [testbed-node-5] => (item=5)

TASK [ceph-osd : Systemd start osd] ********************************************
Saturday 10 January 2026 14:39:17 +0000 (0:00:02.589) 0:08:19.000 ******
changed: [testbed-node-3] => (item=0)
changed: [testbed-node-4] => (item=1)
changed: [testbed-node-5] => (item=2)
changed: [testbed-node-3] => (item=4)
changed: [testbed-node-4] => (item=3)
changed: [testbed-node-5] => (item=5)

TASK [ceph-osd : Unset noup flag] **********************************************
Saturday 10 January 2026 14:39:22 +0000 (0:00:04.544) 0:08:23.545 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)]

TASK [ceph-osd : Wait for all osd to be up] ************************************
Saturday 10 January 2026 14:39:25 +0000 (0:00:03.163) 0:08:26.708 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left).
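The sequence above follows the usual pattern: set the `noup` flag so freshly started OSDs do not flap while their peers come up, create and start the OSDs, unset `noup`, then retry "Wait for all osd to be up" until the cluster reports every OSD up (one retry was needed here). A sketch of the condition being polled, assuming it compares the OSD counters from `ceph osd stat -f json` (the exact fields ceph-ansible checks may differ):

```python
import json

def all_osds_up(osd_stat_json: str) -> bool:
    """True when every OSD known to the cluster is both 'up' and 'in'."""
    stat = json.loads(osd_stat_json)
    return (stat["num_osds"] > 0
            and stat["num_up_osds"] == stat["num_osds"]
            and stat["num_in_osds"] == stat["num_osds"])

# Hypothetical first poll: one of the six OSDs created above still starting.
print(all_osds_up('{"num_osds": 6, "num_up_osds": 5, "num_in_osds": 6}'))
# After the retry: all six OSDs up and in.
print(all_osds_up('{"num_osds": 6, "num_up_osds": 6, "num_in_osds": 6}'))
```

With a 60-retry budget, a slow OSD start only fails the play if the cluster never converges, which matches the single `FAILED - RETRYING` followed by `ok` in the log.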
ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]

TASK [ceph-osd : Include crush_rules.yml] **************************************
Saturday 10 January 2026 14:39:38 +0000 (0:00:12.565) 0:08:39.273 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
Saturday 10 January 2026 14:39:39 +0000 (0:00:01.138) 0:08:40.411 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Osds handler] **********************************
Saturday 10 January 2026 14:39:39 +0000 (0:00:00.378) 0:08:40.790 ******
included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5

RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
Saturday 10 January 2026 14:39:40 +0000 (0:00:00.804) 0:08:41.595 ******
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
Saturday 10 January 2026 14:39:40 +0000 (0:00:00.411) 0:08:42.007 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
Saturday 10 January 2026 14:39:41 +0000 (0:00:00.344) 0:08:42.352 ******
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
Saturday 10 January 2026 14:39:41 +0000 (0:00:00.255) 0:08:42.607 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Get pool list] *********************************
Saturday 10 January 2026 14:39:41 +0000 (0:00:00.324) 0:08:42.931 ******
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
Saturday 10 January 2026 14:39:42 +0000 (0:00:00.233) 0:08:43.164 ******
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
Saturday 10 January 2026 14:39:42 +0000 (0:00:00.221) 0:08:43.386 ******
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
Saturday 10 January 2026 14:39:42 +0000 (0:00:00.115) 0:08:43.502 ******
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
Saturday 10 January 2026 14:39:43 +0000 (0:00:00.861) 0:08:44.364 ******
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
Saturday 10 January 2026 14:39:43 +0000 (0:00:00.239) 0:08:44.604 ******
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
Saturday 10 January 2026 14:39:43 +0000 (0:00:00.392) 0:08:44.997 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
Saturday 10 January 2026 14:39:44 +0000 (0:00:00.396) 0:08:45.393 ******
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
Saturday 10 January 2026 14:39:44 +0000 (0:00:00.235) 0:08:45.629 ******
skipping: [testbed-node-3]

PLAY [Apply role ceph-crash] ***************************************************

TASK [ceph-handler : Include check_running_cluster.yml] ************************
Saturday 10 January 2026 14:39:45 +0000 (0:00:00.928) 0:08:46.557 ******
included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-handler : Include check_running_containers.yml] *********************
Saturday 10 January 2026 14:39:46 +0000 (0:00:01.182) 0:08:47.740 ******
included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-handler : Check for a mon container] ********************************
Saturday 10 January 2026 14:39:47 +0000 (0:00:01.236) 0:08:48.977 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Check for an osd container] *******************************
Saturday 10 January 2026 14:39:49 +0000 (0:00:01.166) 0:08:50.143 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
ok: [testbed-node-3]
skipping: [testbed-node-2]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a mds container] ********************************
Saturday 10 January 2026 14:39:49 +0000 (0:00:00.803) 0:08:50.947 ******
ok: [testbed-node-3]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
ok: [testbed-node-4]
skipping: [testbed-node-2]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a rgw container] ********************************
Saturday 10 January 2026 14:39:51 +0000 (0:00:01.197) 0:08:52.144 ******
skipping: [testbed-node-0]
ok: [testbed-node-3]
skipping: [testbed-node-1]
ok: [testbed-node-4]
skipping: [testbed-node-2]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a mgr container] ********************************
Saturday 10 January 2026 14:39:51 +0000 (0:00:00.848) 0:08:52.993 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Check for a rbd mirror container] *************************
Saturday 10 January 2026 14:39:53 +0000 (0:00:01.328) 0:08:54.321 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a nfs container] ********************************
Saturday 10 January 2026 14:39:53 +0000 (0:00:00.700) 0:08:55.022 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a ceph-crash container] *************************
Saturday 10 January 2026 14:39:54 +0000 (0:00:00.873) 0:08:55.895 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
| 2026-01-10 14:42:32.144128 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-01-10 14:42:32.144134 | orchestrator | Saturday 10 January 2026 14:39:55 +0000 (0:00:01.029) 0:08:56.925 ****** 2026-01-10 14:42:32.144139 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:42:32.144150 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:42:32.144155 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:42:32.144161 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:42:32.144167 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:42:32.144173 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:42:32.144178 | orchestrator | 2026-01-10 14:42:32.144184 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-01-10 14:42:32.144190 | orchestrator | Saturday 10 January 2026 14:39:57 +0000 (0:00:01.370) 0:08:58.295 ****** 2026-01-10 14:42:32.144195 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:42:32.144201 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:42:32.144207 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:42:32.144213 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:42:32.144219 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:42:32.144225 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:42:32.144231 | orchestrator | 2026-01-10 14:42:32.144237 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-01-10 14:42:32.144243 | orchestrator | Saturday 10 January 2026 14:39:57 +0000 (0:00:00.580) 0:08:58.876 ****** 2026-01-10 14:42:32.144249 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:42:32.144255 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:42:32.144261 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:42:32.144267 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:42:32.144272 | orchestrator | ok: [testbed-node-1] 2026-01-10 
14:42:32.144278 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:42:32.144284 | orchestrator | 2026-01-10 14:42:32.144290 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-01-10 14:42:32.144296 | orchestrator | Saturday 10 January 2026 14:39:58 +0000 (0:00:00.904) 0:08:59.780 ****** 2026-01-10 14:42:32.144307 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:42:32.144313 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:42:32.144319 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:42:32.144325 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:42:32.144331 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:42:32.144337 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:42:32.144342 | orchestrator | 2026-01-10 14:42:32.144348 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-01-10 14:42:32.144353 | orchestrator | Saturday 10 January 2026 14:39:59 +0000 (0:00:00.647) 0:09:00.427 ****** 2026-01-10 14:42:32.144359 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:42:32.144365 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:42:32.144371 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:42:32.144377 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:42:32.144383 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:42:32.144389 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:42:32.144395 | orchestrator | 2026-01-10 14:42:32.144401 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-01-10 14:42:32.144406 | orchestrator | Saturday 10 January 2026 14:40:00 +0000 (0:00:00.857) 0:09:01.285 ****** 2026-01-10 14:42:32.144413 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:42:32.144418 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:42:32.144425 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:42:32.144431 | orchestrator | skipping: [testbed-node-0] 
2026-01-10 14:42:32.144439 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:42:32.144464 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:42:32.144473 | orchestrator | 2026-01-10 14:42:32.144479 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-01-10 14:42:32.144486 | orchestrator | Saturday 10 January 2026 14:40:00 +0000 (0:00:00.646) 0:09:01.931 ****** 2026-01-10 14:42:32.144492 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:42:32.144498 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:42:32.144504 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:42:32.144509 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:42:32.144515 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:42:32.144527 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:42:32.144533 | orchestrator | 2026-01-10 14:42:32.144539 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-01-10 14:42:32.144545 | orchestrator | Saturday 10 January 2026 14:40:01 +0000 (0:00:00.970) 0:09:02.901 ****** 2026-01-10 14:42:32.144551 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:42:32.144557 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:42:32.144563 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:42:32.144568 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:42:32.144575 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:42:32.144581 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:42:32.144586 | orchestrator | 2026-01-10 14:42:32.144592 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-01-10 14:42:32.144604 | orchestrator | Saturday 10 January 2026 14:40:02 +0000 (0:00:00.610) 0:09:03.512 ****** 2026-01-10 14:42:32.144610 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:42:32.144616 | orchestrator | skipping: [testbed-node-4] 
2026-01-10 14:42:32.144622 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:42:32.144628 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:42:32.144634 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:42:32.144640 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:42:32.144645 | orchestrator | 2026-01-10 14:42:32.144652 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-01-10 14:42:32.144657 | orchestrator | Saturday 10 January 2026 14:40:03 +0000 (0:00:00.946) 0:09:04.458 ****** 2026-01-10 14:42:32.144664 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:42:32.144670 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:42:32.144675 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:42:32.144681 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:42:32.144687 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:42:32.144693 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:42:32.144698 | orchestrator | 2026-01-10 14:42:32.144704 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-01-10 14:42:32.144710 | orchestrator | Saturday 10 January 2026 14:40:04 +0000 (0:00:00.675) 0:09:05.133 ****** 2026-01-10 14:42:32.144716 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:42:32.144721 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:42:32.144727 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:42:32.144733 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:42:32.144739 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:42:32.144745 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:42:32.144751 | orchestrator | 2026-01-10 14:42:32.144757 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ******************************** 2026-01-10 14:42:32.144764 | orchestrator | Saturday 10 January 2026 14:40:05 +0000 (0:00:01.582) 0:09:06.716 ****** 2026-01-10 14:42:32.144770 | orchestrator | changed: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] 2026-01-10 14:42:32.144777 | orchestrator | 2026-01-10 14:42:32.144783 | orchestrator | TASK [ceph-crash : Get keys from monitors] ************************************* 2026-01-10 14:42:32.144789 | orchestrator | Saturday 10 January 2026 14:40:09 +0000 (0:00:03.778) 0:09:10.494 ****** 2026-01-10 14:42:32.144795 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-01-10 14:42:32.144802 | orchestrator | 2026-01-10 14:42:32.144808 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] ********************************* 2026-01-10 14:42:32.144814 | orchestrator | Saturday 10 January 2026 14:40:11 +0000 (0:00:01.799) 0:09:12.293 ****** 2026-01-10 14:42:32.144820 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:42:32.144827 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:42:32.144833 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:42:32.144839 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:42:32.144845 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:42:32.144851 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:42:32.144856 | orchestrator | 2026-01-10 14:42:32.144863 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] ************************** 2026-01-10 14:42:32.144869 | orchestrator | Saturday 10 January 2026 14:40:12 +0000 (0:00:01.567) 0:09:13.860 ****** 2026-01-10 14:42:32.144880 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:42:32.144885 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:42:32.144892 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:42:32.144898 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:42:32.144904 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:42:32.144909 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:42:32.144915 | orchestrator | 2026-01-10 14:42:32.144926 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] ********************************** 
2026-01-10 14:42:32.144932 | orchestrator | Saturday 10 January 2026 14:40:13 +0000 (0:00:00.916) 0:09:14.777 ****** 2026-01-10 14:42:32.144938 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:42:32.144946 | orchestrator | 2026-01-10 14:42:32.144952 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ******** 2026-01-10 14:42:32.144958 | orchestrator | Saturday 10 January 2026 14:40:14 +0000 (0:00:01.079) 0:09:15.857 ****** 2026-01-10 14:42:32.144963 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:42:32.144969 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:42:32.144975 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:42:32.144980 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:42:32.144986 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:42:32.144992 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:42:32.144998 | orchestrator | 2026-01-10 14:42:32.145003 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] ******************************* 2026-01-10 14:42:32.145010 | orchestrator | Saturday 10 January 2026 14:40:16 +0000 (0:00:01.543) 0:09:17.400 ****** 2026-01-10 14:42:32.145016 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:42:32.145022 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:42:32.145028 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:42:32.145033 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:42:32.145039 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:42:32.145045 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:42:32.145050 | orchestrator | 2026-01-10 14:42:32.145056 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] **************************** 2026-01-10 14:42:32.145062 | orchestrator | Saturday 10 January 2026 14:40:19 +0000 (0:00:02.973) 
0:09:20.374 ****** 2026-01-10 14:42:32.145067 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:42:32.145073 | orchestrator | 2026-01-10 14:42:32.145079 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ****** 2026-01-10 14:42:32.145085 | orchestrator | Saturday 10 January 2026 14:40:20 +0000 (0:00:01.251) 0:09:21.625 ****** 2026-01-10 14:42:32.145091 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:42:32.145096 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:42:32.145102 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:42:32.145108 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:42:32.145113 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:42:32.145119 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:42:32.145125 | orchestrator | 2026-01-10 14:42:32.145131 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] **************** 2026-01-10 14:42:32.145141 | orchestrator | Saturday 10 January 2026 14:40:21 +0000 (0:00:00.870) 0:09:22.495 ****** 2026-01-10 14:42:32.145147 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:42:32.145153 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:42:32.145159 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:42:32.145165 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:42:32.145170 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:42:32.145176 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:42:32.145182 | orchestrator | 2026-01-10 14:42:32.145188 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] ******* 2026-01-10 14:42:32.145194 | orchestrator | Saturday 10 January 2026 14:40:23 +0000 (0:00:02.281) 0:09:24.777 ****** 2026-01-10 14:42:32.145205 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:42:32.145211 | orchestrator 
| ok: [testbed-node-4] 2026-01-10 14:42:32.145217 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:42:32.145223 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:42:32.145229 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:42:32.145236 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:42:32.145241 | orchestrator | 2026-01-10 14:42:32.145248 | orchestrator | PLAY [Apply role ceph-mds] ***************************************************** 2026-01-10 14:42:32.145254 | orchestrator | 2026-01-10 14:42:32.145260 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-01-10 14:42:32.145266 | orchestrator | Saturday 10 January 2026 14:40:24 +0000 (0:00:01.082) 0:09:25.859 ****** 2026-01-10 14:42:32.145273 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-10 14:42:32.145280 | orchestrator | 2026-01-10 14:42:32.145286 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-01-10 14:42:32.145291 | orchestrator | Saturday 10 January 2026 14:40:25 +0000 (0:00:00.570) 0:09:26.430 ****** 2026-01-10 14:42:32.145297 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-10 14:42:32.145304 | orchestrator | 2026-01-10 14:42:32.145309 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-01-10 14:42:32.145315 | orchestrator | Saturday 10 January 2026 14:40:26 +0000 (0:00:00.805) 0:09:27.235 ****** 2026-01-10 14:42:32.145322 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:42:32.145327 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:42:32.145334 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:42:32.145339 | orchestrator | 2026-01-10 14:42:32.145345 | orchestrator | TASK [ceph-handler : Check for an osd container] 
******************************* 2026-01-10 14:42:32.145351 | orchestrator | Saturday 10 January 2026 14:40:26 +0000 (0:00:00.319) 0:09:27.555 ****** 2026-01-10 14:42:32.145357 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:42:32.145363 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:42:32.145368 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:42:32.145373 | orchestrator | 2026-01-10 14:42:32.145378 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-01-10 14:42:32.145384 | orchestrator | Saturday 10 January 2026 14:40:27 +0000 (0:00:00.766) 0:09:28.322 ****** 2026-01-10 14:42:32.145389 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:42:32.145394 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:42:32.145399 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:42:32.145405 | orchestrator | 2026-01-10 14:42:32.145411 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-01-10 14:42:32.145421 | orchestrator | Saturday 10 January 2026 14:40:28 +0000 (0:00:01.129) 0:09:29.451 ****** 2026-01-10 14:42:32.145426 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:42:32.145433 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:42:32.145439 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:42:32.145487 | orchestrator | 2026-01-10 14:42:32.145493 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-01-10 14:42:32.145500 | orchestrator | Saturday 10 January 2026 14:40:29 +0000 (0:00:00.822) 0:09:30.274 ****** 2026-01-10 14:42:32.145506 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:42:32.145512 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:42:32.145518 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:42:32.145524 | orchestrator | 2026-01-10 14:42:32.145530 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-01-10 
14:42:32.145536 | orchestrator | Saturday 10 January 2026 14:40:29 +0000 (0:00:00.355) 0:09:30.629 ****** 2026-01-10 14:42:32.145543 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:42:32.145547 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:42:32.145550 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:42:32.145554 | orchestrator | 2026-01-10 14:42:32.145558 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-01-10 14:42:32.145565 | orchestrator | Saturday 10 January 2026 14:40:29 +0000 (0:00:00.298) 0:09:30.928 ****** 2026-01-10 14:42:32.145569 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:42:32.145573 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:42:32.145577 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:42:32.145580 | orchestrator | 2026-01-10 14:42:32.145584 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-01-10 14:42:32.145588 | orchestrator | Saturday 10 January 2026 14:40:30 +0000 (0:00:00.695) 0:09:31.623 ****** 2026-01-10 14:42:32.145591 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:42:32.145595 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:42:32.145599 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:42:32.145604 | orchestrator | 2026-01-10 14:42:32.145610 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-01-10 14:42:32.145616 | orchestrator | Saturday 10 January 2026 14:40:31 +0000 (0:00:00.891) 0:09:32.514 ****** 2026-01-10 14:42:32.145622 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:42:32.145628 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:42:32.145633 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:42:32.145639 | orchestrator | 2026-01-10 14:42:32.145645 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-01-10 14:42:32.145651 | orchestrator | 
Saturday 10 January 2026 14:40:32 +0000 (0:00:00.852) 0:09:33.367 ****** 2026-01-10 14:42:32.145657 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:42:32.145662 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:42:32.145669 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:42:32.145675 | orchestrator | 2026-01-10 14:42:32.145681 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-01-10 14:42:32.145694 | orchestrator | Saturday 10 January 2026 14:40:32 +0000 (0:00:00.310) 0:09:33.678 ****** 2026-01-10 14:42:32.145700 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:42:32.145707 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:42:32.145713 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:42:32.145719 | orchestrator | 2026-01-10 14:42:32.145725 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-01-10 14:42:32.145732 | orchestrator | Saturday 10 January 2026 14:40:33 +0000 (0:00:00.656) 0:09:34.334 ****** 2026-01-10 14:42:32.145738 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:42:32.145744 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:42:32.145750 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:42:32.145757 | orchestrator | 2026-01-10 14:42:32.145763 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-01-10 14:42:32.145770 | orchestrator | Saturday 10 January 2026 14:40:33 +0000 (0:00:00.337) 0:09:34.672 ****** 2026-01-10 14:42:32.145775 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:42:32.145781 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:42:32.145788 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:42:32.145794 | orchestrator | 2026-01-10 14:42:32.145800 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-01-10 14:42:32.145806 | orchestrator | Saturday 10 January 2026 14:40:33 +0000 
(0:00:00.346) 0:09:35.018 ****** 2026-01-10 14:42:32.145812 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:42:32.145818 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:42:32.145825 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:42:32.145831 | orchestrator | 2026-01-10 14:42:32.145837 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-01-10 14:42:32.145843 | orchestrator | Saturday 10 January 2026 14:40:34 +0000 (0:00:00.326) 0:09:35.345 ****** 2026-01-10 14:42:32.145849 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:42:32.145855 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:42:32.145861 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:42:32.145867 | orchestrator | 2026-01-10 14:42:32.145873 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-01-10 14:42:32.145879 | orchestrator | Saturday 10 January 2026 14:40:34 +0000 (0:00:00.603) 0:09:35.948 ****** 2026-01-10 14:42:32.145892 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:42:32.145898 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:42:32.145904 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:42:32.145910 | orchestrator | 2026-01-10 14:42:32.145916 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-01-10 14:42:32.145922 | orchestrator | Saturday 10 January 2026 14:40:35 +0000 (0:00:00.376) 0:09:36.325 ****** 2026-01-10 14:42:32.145929 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:42:32.145935 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:42:32.145941 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:42:32.145947 | orchestrator | 2026-01-10 14:42:32.145954 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-01-10 14:42:32.145960 | orchestrator | Saturday 10 January 2026 14:40:35 +0000 (0:00:00.360) 
0:09:36.685 ****** 2026-01-10 14:42:32.145966 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:42:32.145972 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:42:32.145978 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:42:32.145984 | orchestrator | 2026-01-10 14:42:32.145990 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-01-10 14:42:32.145996 | orchestrator | Saturday 10 January 2026 14:40:35 +0000 (0:00:00.329) 0:09:37.015 ****** 2026-01-10 14:42:32.146002 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:42:32.146049 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:42:32.146057 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:42:32.146063 | orchestrator | 2026-01-10 14:42:32.146068 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] *************************** 2026-01-10 14:42:32.146074 | orchestrator | Saturday 10 January 2026 14:40:36 +0000 (0:00:00.865) 0:09:37.881 ****** 2026-01-10 14:42:32.146081 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:42:32.146087 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:42:32.146094 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3 2026-01-10 14:42:32.146100 | orchestrator | 2026-01-10 14:42:32.146107 | orchestrator | TASK [ceph-facts : Get current default crush rule details] ********************* 2026-01-10 14:42:32.146113 | orchestrator | Saturday 10 January 2026 14:40:37 +0000 (0:00:00.561) 0:09:38.442 ****** 2026-01-10 14:42:32.146119 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-01-10 14:42:32.146125 | orchestrator | 2026-01-10 14:42:32.146131 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************ 2026-01-10 14:42:32.146136 | orchestrator | Saturday 10 January 2026 14:40:39 +0000 (0:00:02.250) 0:09:40.693 ****** 2026-01-10 14:42:32.146145 | orchestrator | skipping: [testbed-node-3] => 
(item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})  2026-01-10 14:42:32.146153 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:42:32.146159 | orchestrator | 2026-01-10 14:42:32.146165 | orchestrator | TASK [ceph-mds : Create filesystem pools] ************************************** 2026-01-10 14:42:32.146171 | orchestrator | Saturday 10 January 2026 14:40:39 +0000 (0:00:00.204) 0:09:40.897 ****** 2026-01-10 14:42:32.146179 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-01-10 14:42:32.146191 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-01-10 14:42:32.146197 | orchestrator | 2026-01-10 14:42:32.146210 | orchestrator | TASK [ceph-mds : Create ceph filesystem] *************************************** 2026-01-10 14:42:32.146216 | orchestrator | Saturday 10 January 2026 14:40:49 +0000 (0:00:09.333) 0:09:50.231 ****** 2026-01-10 14:42:32.146233 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-01-10 14:42:32.146239 | orchestrator | 2026-01-10 14:42:32.146245 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2026-01-10 14:42:32.146252 | orchestrator | Saturday 10 January 2026 14:40:53 +0000 (0:00:04.134) 0:09:54.365 ****** 2026-01-10 14:42:32.146257 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, 
testbed-node-5 2026-01-10 14:42:32.146265 | orchestrator | 2026-01-10 14:42:32.146271 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2026-01-10 14:42:32.146277 | orchestrator | Saturday 10 January 2026 14:40:53 +0000 (0:00:00.588) 0:09:54.954 ****** 2026-01-10 14:42:32.146283 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2026-01-10 14:42:32.146289 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2026-01-10 14:42:32.146295 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2026-01-10 14:42:32.146301 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2026-01-10 14:42:32.146306 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2026-01-10 14:42:32.146313 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2026-01-10 14:42:32.146318 | orchestrator | 2026-01-10 14:42:32.146324 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2026-01-10 14:42:32.146329 | orchestrator | Saturday 10 January 2026 14:40:54 +0000 (0:00:01.039) 0:09:55.993 ****** 2026-01-10 14:42:32.146335 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-10 14:42:32.146341 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-01-10 14:42:32.146347 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-01-10 14:42:32.146353 | orchestrator | 2026-01-10 14:42:32.146359 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2026-01-10 14:42:32.146365 | orchestrator | Saturday 10 January 2026 14:40:57 +0000 (0:00:02.326) 0:09:58.320 ****** 2026-01-10 14:42:32.146371 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-01-10 14:42:32.146377 | orchestrator | skipping: [testbed-node-3] 
=> (item=None)  2026-01-10 14:42:32.146382 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:42:32.146388 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-01-10 14:42:32.146393 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-01-10 14:42:32.146399 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:42:32.146405 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-01-10 14:42:32.146411 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-01-10 14:42:32.146417 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:42:32.146423 | orchestrator | 2026-01-10 14:42:32.146429 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2026-01-10 14:42:32.146434 | orchestrator | Saturday 10 January 2026 14:40:58 +0000 (0:00:01.374) 0:09:59.694 ****** 2026-01-10 14:42:32.146456 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:42:32.146466 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:42:32.146473 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:42:32.146479 | orchestrator | 2026-01-10 14:42:32.146488 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2026-01-10 14:42:32.146494 | orchestrator | Saturday 10 January 2026 14:41:01 +0000 (0:00:02.596) 0:10:02.291 ****** 2026-01-10 14:42:32.146500 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:42:32.146505 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:42:32.146512 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:42:32.146518 | orchestrator | 2026-01-10 14:42:32.146524 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2026-01-10 14:42:32.146529 | orchestrator | Saturday 10 January 2026 14:41:01 +0000 (0:00:00.348) 0:10:02.639 ****** 2026-01-10 14:42:32.146536 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, 
testbed-node-5 2026-01-10 14:42:32.146547 | orchestrator | 2026-01-10 14:42:32.146554 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2026-01-10 14:42:32.146560 | orchestrator | Saturday 10 January 2026 14:41:02 +0000 (0:00:00.855) 0:10:03.495 ****** 2026-01-10 14:42:32.146565 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-10 14:42:32.146572 | orchestrator | 2026-01-10 14:42:32.146577 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2026-01-10 14:42:32.146583 | orchestrator | Saturday 10 January 2026 14:41:02 +0000 (0:00:00.568) 0:10:04.064 ****** 2026-01-10 14:42:32.146589 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:42:32.146595 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:42:32.146601 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:42:32.146607 | orchestrator | 2026-01-10 14:42:32.146612 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2026-01-10 14:42:32.146618 | orchestrator | Saturday 10 January 2026 14:41:04 +0000 (0:00:01.198) 0:10:05.262 ****** 2026-01-10 14:42:32.146624 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:42:32.146630 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:42:32.146635 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:42:32.146641 | orchestrator | 2026-01-10 14:42:32.146647 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2026-01-10 14:42:32.146652 | orchestrator | Saturday 10 January 2026 14:41:05 +0000 (0:00:01.496) 0:10:06.758 ****** 2026-01-10 14:42:32.146658 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:42:32.146664 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:42:32.146670 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:42:32.146676 | orchestrator | 2026-01-10 
14:42:32.146682 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2026-01-10 14:42:32.146695 | orchestrator | Saturday 10 January 2026 14:41:07 +0000 (0:00:02.023) 0:10:08.782 ****** 2026-01-10 14:42:32.146701 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:42:32.146707 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:42:32.146713 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:42:32.146720 | orchestrator | 2026-01-10 14:42:32.146725 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2026-01-10 14:42:32.146731 | orchestrator | Saturday 10 January 2026 14:41:09 +0000 (0:00:02.035) 0:10:10.818 ****** 2026-01-10 14:42:32.146737 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:42:32.146743 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:42:32.146749 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:42:32.146754 | orchestrator | 2026-01-10 14:42:32.146760 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-01-10 14:42:32.146766 | orchestrator | Saturday 10 January 2026 14:41:11 +0000 (0:00:01.511) 0:10:12.329 ****** 2026-01-10 14:42:32.146772 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:42:32.146778 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:42:32.146784 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:42:32.146790 | orchestrator | 2026-01-10 14:42:32.146796 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-01-10 14:42:32.146802 | orchestrator | Saturday 10 January 2026 14:41:11 +0000 (0:00:00.686) 0:10:13.016 ****** 2026-01-10 14:42:32.146808 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-10 14:42:32.146814 | orchestrator | 2026-01-10 14:42:32.146820 | orchestrator | RUNNING HANDLER [ceph-handler : Set 
_mds_handler_called before restart] ******** 2026-01-10 14:42:32.146825 | orchestrator | Saturday 10 January 2026 14:41:12 +0000 (0:00:00.778) 0:10:13.794 ****** 2026-01-10 14:42:32.146831 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:42:32.146837 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:42:32.146843 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:42:32.146849 | orchestrator | 2026-01-10 14:42:32.146856 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2026-01-10 14:42:32.146867 | orchestrator | Saturday 10 January 2026 14:41:13 +0000 (0:00:00.354) 0:10:14.148 ****** 2026-01-10 14:42:32.146872 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:42:32.146878 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:42:32.146884 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:42:32.146890 | orchestrator | 2026-01-10 14:42:32.146895 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2026-01-10 14:42:32.146901 | orchestrator | Saturday 10 January 2026 14:41:14 +0000 (0:00:01.377) 0:10:15.526 ****** 2026-01-10 14:42:32.146907 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-10 14:42:32.146913 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-10 14:42:32.146918 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-10 14:42:32.146924 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:42:32.146930 | orchestrator | 2026-01-10 14:42:32.146936 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2026-01-10 14:42:32.146941 | orchestrator | Saturday 10 January 2026 14:41:15 +0000 (0:00:00.915) 0:10:16.442 ****** 2026-01-10 14:42:32.146947 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:42:32.146953 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:42:32.146960 | orchestrator | ok: [testbed-node-5] 2026-01-10 
14:42:32.146965 | orchestrator | 2026-01-10 14:42:32.146971 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2026-01-10 14:42:32.146977 | orchestrator | 2026-01-10 14:42:32.146987 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-01-10 14:42:32.146993 | orchestrator | Saturday 10 January 2026 14:41:16 +0000 (0:00:00.831) 0:10:17.273 ****** 2026-01-10 14:42:32.146998 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-10 14:42:32.147005 | orchestrator | 2026-01-10 14:42:32.147011 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-01-10 14:42:32.147016 | orchestrator | Saturday 10 January 2026 14:41:16 +0000 (0:00:00.526) 0:10:17.799 ****** 2026-01-10 14:42:32.147022 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-10 14:42:32.147028 | orchestrator | 2026-01-10 14:42:32.147033 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-01-10 14:42:32.147039 | orchestrator | Saturday 10 January 2026 14:41:17 +0000 (0:00:00.899) 0:10:18.699 ****** 2026-01-10 14:42:32.147045 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:42:32.147051 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:42:32.147056 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:42:32.147062 | orchestrator | 2026-01-10 14:42:32.147068 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-01-10 14:42:32.147074 | orchestrator | Saturday 10 January 2026 14:41:17 +0000 (0:00:00.316) 0:10:19.015 ****** 2026-01-10 14:42:32.147080 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:42:32.147086 | orchestrator | ok: [testbed-node-4] 2026-01-10 
14:42:32.147092 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:42:32.147098 | orchestrator | 2026-01-10 14:42:32.147104 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-01-10 14:42:32.147111 | orchestrator | Saturday 10 January 2026 14:41:18 +0000 (0:00:00.795) 0:10:19.810 ****** 2026-01-10 14:42:32.147117 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:42:32.147123 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:42:32.147129 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:42:32.147135 | orchestrator | 2026-01-10 14:42:32.147141 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-01-10 14:42:32.147146 | orchestrator | Saturday 10 January 2026 14:41:19 +0000 (0:00:01.065) 0:10:20.876 ****** 2026-01-10 14:42:32.147152 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:42:32.147158 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:42:32.147164 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:42:32.147175 | orchestrator | 2026-01-10 14:42:32.147181 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-01-10 14:42:32.147188 | orchestrator | Saturday 10 January 2026 14:41:20 +0000 (0:00:00.859) 0:10:21.736 ****** 2026-01-10 14:42:32.147194 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:42:32.147205 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:42:32.147212 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:42:32.147218 | orchestrator | 2026-01-10 14:42:32.147224 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-01-10 14:42:32.147230 | orchestrator | Saturday 10 January 2026 14:41:20 +0000 (0:00:00.320) 0:10:22.056 ****** 2026-01-10 14:42:32.147236 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:42:32.147242 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:42:32.147249 | orchestrator | skipping: 
[testbed-node-5] 2026-01-10 14:42:32.147254 | orchestrator | 2026-01-10 14:42:32.147260 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-01-10 14:42:32.147266 | orchestrator | Saturday 10 January 2026 14:41:21 +0000 (0:00:00.313) 0:10:22.370 ****** 2026-01-10 14:42:32.147273 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:42:32.147278 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:42:32.147285 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:42:32.147291 | orchestrator | 2026-01-10 14:42:32.147297 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-01-10 14:42:32.147303 | orchestrator | Saturday 10 January 2026 14:41:21 +0000 (0:00:00.587) 0:10:22.957 ****** 2026-01-10 14:42:32.147309 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:42:32.147315 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:42:32.147322 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:42:32.147328 | orchestrator | 2026-01-10 14:42:32.147334 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-01-10 14:42:32.147340 | orchestrator | Saturday 10 January 2026 14:41:22 +0000 (0:00:00.727) 0:10:23.684 ****** 2026-01-10 14:42:32.147346 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:42:32.147352 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:42:32.147358 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:42:32.147364 | orchestrator | 2026-01-10 14:42:32.147370 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-01-10 14:42:32.147377 | orchestrator | Saturday 10 January 2026 14:41:23 +0000 (0:00:00.704) 0:10:24.389 ****** 2026-01-10 14:42:32.147381 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:42:32.147387 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:42:32.147393 | orchestrator | skipping: [testbed-node-5] 2026-01-10 
14:42:32.147399 | orchestrator | 2026-01-10 14:42:32.147405 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-01-10 14:42:32.147411 | orchestrator | Saturday 10 January 2026 14:41:23 +0000 (0:00:00.292) 0:10:24.681 ****** 2026-01-10 14:42:32.147416 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:42:32.147422 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:42:32.147428 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:42:32.147434 | orchestrator | 2026-01-10 14:42:32.147453 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-01-10 14:42:32.147460 | orchestrator | Saturday 10 January 2026 14:41:24 +0000 (0:00:00.587) 0:10:25.269 ****** 2026-01-10 14:42:32.147465 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:42:32.147471 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:42:32.147477 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:42:32.147483 | orchestrator | 2026-01-10 14:42:32.147489 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-01-10 14:42:32.147495 | orchestrator | Saturday 10 January 2026 14:41:24 +0000 (0:00:00.338) 0:10:25.608 ****** 2026-01-10 14:42:32.147501 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:42:32.147508 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:42:32.147514 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:42:32.147520 | orchestrator | 2026-01-10 14:42:32.147530 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-01-10 14:42:32.147542 | orchestrator | Saturday 10 January 2026 14:41:24 +0000 (0:00:00.374) 0:10:25.982 ****** 2026-01-10 14:42:32.147548 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:42:32.147554 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:42:32.147561 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:42:32.147567 | orchestrator | 2026-01-10 
14:42:32.147573 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-01-10 14:42:32.147579 | orchestrator | Saturday 10 January 2026 14:41:25 +0000 (0:00:00.365) 0:10:26.348 ****** 2026-01-10 14:42:32.147585 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:42:32.147591 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:42:32.147598 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:42:32.147603 | orchestrator | 2026-01-10 14:42:32.147610 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-01-10 14:42:32.147616 | orchestrator | Saturday 10 January 2026 14:41:25 +0000 (0:00:00.610) 0:10:26.958 ****** 2026-01-10 14:42:32.147623 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:42:32.147629 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:42:32.147635 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:42:32.147641 | orchestrator | 2026-01-10 14:42:32.147647 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-01-10 14:42:32.147653 | orchestrator | Saturday 10 January 2026 14:41:26 +0000 (0:00:00.339) 0:10:27.297 ****** 2026-01-10 14:42:32.147659 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:42:32.147665 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:42:32.147671 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:42:32.147677 | orchestrator | 2026-01-10 14:42:32.147683 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-01-10 14:42:32.147704 | orchestrator | Saturday 10 January 2026 14:41:26 +0000 (0:00:00.321) 0:10:27.618 ****** 2026-01-10 14:42:32.147711 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:42:32.147724 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:42:32.147730 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:42:32.147737 | orchestrator | 2026-01-10 14:42:32.147743 | 
orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-01-10 14:42:32.147749 | orchestrator | Saturday 10 January 2026 14:41:26 +0000 (0:00:00.340) 0:10:27.959 ****** 2026-01-10 14:42:32.147755 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:42:32.147761 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:42:32.147766 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:42:32.147773 | orchestrator | 2026-01-10 14:42:32.147779 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-01-10 14:42:32.147785 | orchestrator | Saturday 10 January 2026 14:41:27 +0000 (0:00:00.852) 0:10:28.811 ****** 2026-01-10 14:42:32.147796 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-10 14:42:32.147802 | orchestrator | 2026-01-10 14:42:32.147807 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-01-10 14:42:32.147813 | orchestrator | Saturday 10 January 2026 14:41:28 +0000 (0:00:00.574) 0:10:29.386 ****** 2026-01-10 14:42:32.147820 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-10 14:42:32.147828 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-01-10 14:42:32.147833 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-01-10 14:42:32.147840 | orchestrator | 2026-01-10 14:42:32.147846 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-01-10 14:42:32.147852 | orchestrator | Saturday 10 January 2026 14:41:30 +0000 (0:00:02.304) 0:10:31.691 ****** 2026-01-10 14:42:32.147858 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-01-10 14:42:32.147865 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-01-10 14:42:32.147871 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:42:32.147877 | orchestrator 
| changed: [testbed-node-4] => (item=None) 2026-01-10 14:42:32.147883 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-01-10 14:42:32.147894 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:42:32.147901 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-01-10 14:42:32.147907 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-01-10 14:42:32.147913 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:42:32.147919 | orchestrator | 2026-01-10 14:42:32.147925 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2026-01-10 14:42:32.147932 | orchestrator | Saturday 10 January 2026 14:41:32 +0000 (0:00:01.684) 0:10:33.375 ****** 2026-01-10 14:42:32.147938 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:42:32.147944 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:42:32.147950 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:42:32.147956 | orchestrator | 2026-01-10 14:42:32.147962 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-01-10 14:42:32.147968 | orchestrator | Saturday 10 January 2026 14:41:32 +0000 (0:00:00.343) 0:10:33.719 ****** 2026-01-10 14:42:32.147975 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-10 14:42:32.147981 | orchestrator | 2026-01-10 14:42:32.147987 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-01-10 14:42:32.147994 | orchestrator | Saturday 10 January 2026 14:41:33 +0000 (0:00:00.562) 0:10:34.281 ****** 2026-01-10 14:42:32.148000 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-01-10 14:42:32.148008 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => 
(item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-01-10 14:42:32.148014 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-01-10 14:42:32.148020 | orchestrator | 2026-01-10 14:42:32.148026 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-01-10 14:42:32.148032 | orchestrator | Saturday 10 January 2026 14:41:34 +0000 (0:00:01.389) 0:10:35.671 ****** 2026-01-10 14:42:32.148038 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-10 14:42:32.148045 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-01-10 14:42:32.148051 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-10 14:42:32.148057 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-01-10 14:42:32.148064 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-10 14:42:32.148069 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-01-10 14:42:32.148075 | orchestrator | 2026-01-10 14:42:32.148081 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-01-10 14:42:32.148087 | orchestrator | Saturday 10 January 2026 14:41:39 +0000 (0:00:05.036) 0:10:40.707 ****** 2026-01-10 14:42:32.148094 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-10 14:42:32.148099 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-01-10 14:42:32.148105 | orchestrator | 
ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-10 14:42:32.148111 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-01-10 14:42:32.148118 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-10 14:42:32.148124 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-01-10 14:42:32.148130 | orchestrator | 2026-01-10 14:42:32.148136 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-01-10 14:42:32.148146 | orchestrator | Saturday 10 January 2026 14:41:42 +0000 (0:00:03.356) 0:10:44.064 ****** 2026-01-10 14:42:32.148152 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-01-10 14:42:32.148158 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:42:32.148165 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-01-10 14:42:32.148171 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:42:32.148177 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-01-10 14:42:32.148183 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:42:32.148189 | orchestrator | 2026-01-10 14:42:32.148200 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-01-10 14:42:32.148206 | orchestrator | Saturday 10 January 2026 14:41:44 +0000 (0:00:01.329) 0:10:45.394 ****** 2026-01-10 14:42:32.148212 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2026-01-10 14:42:32.148218 | orchestrator | 2026-01-10 14:42:32.148224 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-01-10 14:42:32.148230 | orchestrator | Saturday 10 January 2026 14:41:44 +0000 (0:00:00.355) 0:10:45.749 ****** 2026-01-10 14:42:32.148237 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 
'replicated'}})  2026-01-10 14:42:32.148244 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-10 14:42:32.148250 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-10 14:42:32.148257 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-10 14:42:32.148263 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-10 14:42:32.148269 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:42:32.148275 | orchestrator | 2026-01-10 14:42:32.148282 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-01-10 14:42:32.148287 | orchestrator | Saturday 10 January 2026 14:41:46 +0000 (0:00:01.461) 0:10:47.211 ****** 2026-01-10 14:42:32.148479 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-10 14:42:32.148519 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-10 14:42:32.148526 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-10 14:42:32.148532 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-10 14:42:32.148538 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-10 14:42:32.148544 | orchestrator | skipping: [testbed-node-3] 2026-01-10 
14:42:32.148550 | orchestrator | 2026-01-10 14:42:32.148557 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-01-10 14:42:32.148564 | orchestrator | Saturday 10 January 2026 14:41:46 +0000 (0:00:00.631) 0:10:47.843 ****** 2026-01-10 14:42:32.148574 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-01-10 14:42:32.148582 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-01-10 14:42:32.148587 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-01-10 14:42:32.148604 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-01-10 14:42:32.148614 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-01-10 14:42:32.148620 | orchestrator | 2026-01-10 14:42:32.148626 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-01-10 14:42:32.148632 | orchestrator | Saturday 10 January 2026 14:42:17 +0000 (0:00:30.250) 0:11:18.094 ****** 2026-01-10 14:42:32.148638 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:42:32.148644 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:42:32.148650 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:42:32.148656 | orchestrator | 2026-01-10 14:42:32.148662 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-01-10 14:42:32.148667 | orchestrator | 
Saturday 10 January 2026 14:42:17 +0000 (0:00:00.344) 0:11:18.439 ****** 2026-01-10 14:42:32.148673 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:42:32.148679 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:42:32.148685 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:42:32.148692 | orchestrator | 2026-01-10 14:42:32.148698 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-01-10 14:42:32.148703 | orchestrator | Saturday 10 January 2026 14:42:17 +0000 (0:00:00.331) 0:11:18.770 ****** 2026-01-10 14:42:32.148709 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-10 14:42:32.148715 | orchestrator | 2026-01-10 14:42:32.148722 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2026-01-10 14:42:32.148728 | orchestrator | Saturday 10 January 2026 14:42:18 +0000 (0:00:00.785) 0:11:19.555 ****** 2026-01-10 14:42:32.148744 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-10 14:42:32.148750 | orchestrator | 2026-01-10 14:42:32.148757 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-01-10 14:42:32.148762 | orchestrator | Saturday 10 January 2026 14:42:19 +0000 (0:00:00.604) 0:11:20.160 ****** 2026-01-10 14:42:32.148769 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:42:32.148775 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:42:32.148781 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:42:32.148786 | orchestrator | 2026-01-10 14:42:32.148793 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-01-10 14:42:32.148799 | orchestrator | Saturday 10 January 2026 14:42:20 +0000 (0:00:01.434) 0:11:21.594 ****** 2026-01-10 14:42:32.148805 | orchestrator | changed: 
[testbed-node-3] 2026-01-10 14:42:32.148811 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:42:32.148817 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:42:32.148823 | orchestrator | 2026-01-10 14:42:32.148829 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-01-10 14:42:32.148835 | orchestrator | Saturday 10 January 2026 14:42:22 +0000 (0:00:01.563) 0:11:23.158 ****** 2026-01-10 14:42:32.148841 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:42:32.148847 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:42:32.148853 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:42:32.148859 | orchestrator | 2026-01-10 14:42:32.148865 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-01-10 14:42:32.148871 | orchestrator | Saturday 10 January 2026 14:42:24 +0000 (0:00:02.074) 0:11:25.232 ****** 2026-01-10 14:42:32.148877 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-01-10 14:42:32.148884 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-01-10 14:42:32.148890 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-01-10 14:42:32.148900 | orchestrator | 2026-01-10 14:42:32.148906 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-01-10 14:42:32.148912 | orchestrator | Saturday 10 January 2026 14:42:27 +0000 (0:00:03.020) 0:11:28.253 ****** 2026-01-10 14:42:32.148918 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:42:32.148924 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:42:32.148931 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:42:32.148937 | orchestrator 
|
2026-01-10 14:42:32.148942 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2026-01-10 14:42:32.148948 | orchestrator | Saturday 10 January 2026 14:42:27 +0000 (0:00:00.359) 0:11:28.612 ******
2026-01-10 14:42:32.148955 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-10 14:42:32.148961 | orchestrator |
2026-01-10 14:42:32.148967 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2026-01-10 14:42:32.148974 | orchestrator | Saturday 10 January 2026 14:42:28 +0000 (0:00:00.537) 0:11:29.150 ******
2026-01-10 14:42:32.148979 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:42:32.148985 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:42:32.148991 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:42:32.148998 | orchestrator |
2026-01-10 14:42:32.149006 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
2026-01-10 14:42:32.149012 | orchestrator | Saturday 10 January 2026 14:42:28 +0000 (0:00:00.667) 0:11:29.817 ******
2026-01-10 14:42:32.149018 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:42:32.149024 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:42:32.149030 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:42:32.149036 | orchestrator |
2026-01-10 14:42:32.149042 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
2026-01-10 14:42:32.149049 | orchestrator | Saturday 10 January 2026 14:42:29 +0000 (0:00:00.351) 0:11:30.169 ******
2026-01-10 14:42:32.149055 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-10 14:42:32.149061 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-10 14:42:32.149067 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-10 14:42:32.149073 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:42:32.149079 | orchestrator |
2026-01-10 14:42:32.149085 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2026-01-10 14:42:32.149091 | orchestrator | Saturday 10 January 2026 14:42:29 +0000 (0:00:00.647) 0:11:30.816 ******
2026-01-10 14:42:32.149097 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:42:32.149104 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:42:32.149110 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:42:32.149116 | orchestrator |
2026-01-10 14:42:32.149122 | orchestrator | PLAY RECAP *********************************************************************
2026-01-10 14:42:32.149128 | orchestrator | testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0
2026-01-10 14:42:32.149135 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0
2026-01-10 14:42:32.149141 | orchestrator | testbed-node-2 : ok=134  changed=34  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0
2026-01-10 14:42:32.149147 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0
2026-01-10 14:42:32.149154 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0
2026-01-10 14:42:32.149164 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0
2026-01-10 14:42:32.149175 | orchestrator |
2026-01-10 14:42:32.149181 | orchestrator |
2026-01-10 14:42:32.149187 | orchestrator |
2026-01-10 14:42:32.149193 | orchestrator | TASKS RECAP ********************************************************************
2026-01-10 14:42:32.149199 | orchestrator | Saturday 10 January 2026 14:42:30 +0000 (0:00:00.266) 0:11:31.083 ******
2026-01-10 14:42:32.149204 | orchestrator | ===============================================================================
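The PLAY RECAP above reports per-host counters (`ok`, `changed`, `unreachable`, `failed`, …), and a CI wrapper that gates on such output can parse them directly from the lines. A minimal sketch of doing so; the regex and helper names here are illustrative, not part of any OSISM or Zuul tooling:

```python
import re

# Matches Ansible "PLAY RECAP" summary lines such as:
#   testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0
RECAP_RE = re.compile(
    r"^(?P<host>\S+)\s*:\s*"
    r"ok=(?P<ok>\d+)\s+changed=(?P<changed>\d+)\s+"
    r"unreachable=(?P<unreachable>\d+)\s+failed=(?P<failed>\d+)"
)


def parse_recap(line):
    """Return per-host counters from one recap line, or None if it doesn't match."""
    m = RECAP_RE.match(line.strip())
    if not m:
        return None
    return {k: (v if k == "host" else int(v)) for k, v in m.groupdict().items()}


def run_failed(stats):
    """Treat a host as failed if any task failed or the host was unreachable."""
    return stats["failed"] > 0 or stats["unreachable"] > 0
```

Applied to the recap in this log, every node shows `failed=0 unreachable=0`, which is why the job continues into the next play.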
2026-01-10 14:42:32.149210 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 49.26s
2026-01-10 14:42:32.149216 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 41.64s
2026-01-10 14:42:32.149221 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 36.54s
2026-01-10 14:42:32.149227 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 30.25s
2026-01-10 14:42:32.149232 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 22.01s
2026-01-10 14:42:32.149238 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 14.40s
2026-01-10 14:42:32.149244 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.57s
2026-01-10 14:42:32.149250 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 9.33s
2026-01-10 14:42:32.149256 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 9.26s
2026-01-10 14:42:32.149262 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node --------------------- 9.25s
2026-01-10 14:42:32.149268 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 7.96s
2026-01-10 14:42:32.149274 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.70s
2026-01-10 14:42:32.149280 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 5.40s
2026-01-10 14:42:32.149286 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 5.04s
2026-01-10 14:42:32.149291 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 4.54s
2026-01-10 14:42:32.149298 | orchestrator | ceph-osd : Apply operating system tuning -------------------------------- 4.25s
2026-01-10 14:42:32.149304 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 4.13s
2026-01-10 14:42:32.149309 | orchestrator | ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created --- 3.88s
2026-01-10 14:42:32.149315 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 3.78s
2026-01-10 14:42:32.149321 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 3.53s
2026-01-10 14:42:32.149327 | orchestrator | 2026-01-10 14:42:32 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:42:35.171999 | orchestrator | 2026-01-10 14:42:35 | INFO  | Task f8ee9db1-a5aa-4411-bee5-bf63fa5e6a70 is in state STARTED
2026-01-10 14:42:35.175533 | orchestrator | 2026-01-10 14:42:35 | INFO  | Task a743dc27-9cca-41d3-9c4c-3d0ae7b23704 is in state STARTED
2026-01-10 14:42:35.177767 | orchestrator | 2026-01-10 14:42:35 | INFO  | Task 4b654874-e1fe-4698-a51e-2c83346e7422 is in state STARTED
2026-01-10 14:42:35.177841 | orchestrator | 2026-01-10 14:42:35 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:42:38.226608 | orchestrator | 2026-01-10 14:42:38 | INFO  | Task f8ee9db1-a5aa-4411-bee5-bf63fa5e6a70 is in state STARTED
2026-01-10 14:42:38.227699 | orchestrator | 2026-01-10 14:42:38 | INFO  | Task a743dc27-9cca-41d3-9c4c-3d0ae7b23704 is in state STARTED
2026-01-10 14:42:38.229958 | orchestrator | 2026-01-10 14:42:38 | INFO  | Task 4b654874-e1fe-4698-a51e-2c83346e7422 is in state STARTED
2026-01-10 14:42:38.230004 | orchestrator | 2026-01-10 14:42:38 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:42:41.285004 | orchestrator | 2026-01-10 14:42:41 | INFO  | Task f8ee9db1-a5aa-4411-bee5-bf63fa5e6a70 is in state STARTED
2026-01-10 14:42:41.286692 | orchestrator | 2026-01-10 14:42:41 | INFO  | Task a743dc27-9cca-41d3-9c4c-3d0ae7b23704 is in state STARTED
2026-01-10 14:42:41.288678 | orchestrator | 2026-01-10 14:42:41 | INFO  | Task 4b654874-e1fe-4698-a51e-2c83346e7422 is in state STARTED
2026-01-10 14:42:41.288711 | orchestrator | 2026-01-10 14:42:41 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:42:44.337981 | orchestrator | 2026-01-10 14:42:44 | INFO  | Task f8ee9db1-a5aa-4411-bee5-bf63fa5e6a70 is in state STARTED
2026-01-10 14:42:44.338992 | orchestrator | 2026-01-10 14:42:44 | INFO  | Task a743dc27-9cca-41d3-9c4c-3d0ae7b23704 is in state STARTED
2026-01-10 14:42:44.341762 | orchestrator | 2026-01-10 14:42:44 | INFO  | Task 4b654874-e1fe-4698-a51e-2c83346e7422 is in state STARTED
2026-01-10 14:42:44.341804 | orchestrator | 2026-01-10 14:42:44 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:42:47.381416 | orchestrator | 2026-01-10 14:42:47 | INFO  | Task f8ee9db1-a5aa-4411-bee5-bf63fa5e6a70 is in state STARTED
2026-01-10 14:42:47.382648 | orchestrator | 2026-01-10 14:42:47 | INFO  | Task a743dc27-9cca-41d3-9c4c-3d0ae7b23704 is in state STARTED
2026-01-10 14:42:47.384262 | orchestrator | 2026-01-10 14:42:47 | INFO  | Task 4b654874-e1fe-4698-a51e-2c83346e7422 is in state STARTED
2026-01-10 14:42:47.384306 | orchestrator | 2026-01-10 14:42:47 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:42:50.431124 | orchestrator | 2026-01-10 14:42:50 | INFO  | Task f8ee9db1-a5aa-4411-bee5-bf63fa5e6a70 is in state STARTED
2026-01-10 14:42:50.431829 | orchestrator | 2026-01-10 14:42:50 | INFO  | Task a743dc27-9cca-41d3-9c4c-3d0ae7b23704 is in state STARTED
2026-01-10 14:42:50.437341 | orchestrator | 2026-01-10 14:42:50 | INFO  | Task 4b654874-e1fe-4698-a51e-2c83346e7422 is in state STARTED
2026-01-10 14:42:50.437411 | orchestrator | 2026-01-10 14:42:50 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:42:53.475293 | orchestrator | 2026-01-10 14:42:53 | INFO  | Task f8ee9db1-a5aa-4411-bee5-bf63fa5e6a70 is in state STARTED
2026-01-10 14:42:53.477153 | orchestrator | 2026-01-10 14:42:53 | INFO  | Task a743dc27-9cca-41d3-9c4c-3d0ae7b23704 is in state STARTED
2026-01-10 14:42:53.478516 | orchestrator | 2026-01-10 14:42:53 | INFO  | Task 4b654874-e1fe-4698-a51e-2c83346e7422 is in state STARTED
2026-01-10 14:42:53.479164 | orchestrator | 2026-01-10 14:42:53 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:42:56.519545 | orchestrator | 2026-01-10 14:42:56 | INFO  | Task f8ee9db1-a5aa-4411-bee5-bf63fa5e6a70 is in state STARTED
2026-01-10 14:42:56.519634 | orchestrator | 2026-01-10 14:42:56 | INFO  | Task a743dc27-9cca-41d3-9c4c-3d0ae7b23704 is in state STARTED
2026-01-10 14:42:56.522121 | orchestrator | 2026-01-10 14:42:56 | INFO  | Task 4b654874-e1fe-4698-a51e-2c83346e7422 is in state STARTED
2026-01-10 14:42:56.522205 | orchestrator | 2026-01-10 14:42:56 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:42:59.570200 | orchestrator | 2026-01-10 14:42:59 | INFO  | Task f8ee9db1-a5aa-4411-bee5-bf63fa5e6a70 is in state STARTED
2026-01-10 14:42:59.570706 | orchestrator | 2026-01-10 14:42:59 | INFO  | Task a743dc27-9cca-41d3-9c4c-3d0ae7b23704 is in state STARTED
2026-01-10 14:42:59.572119 | orchestrator |
2026-01-10 14:42:59.572224 | orchestrator | 2026-01-10 14:42:59 | INFO  | Task 4b654874-e1fe-4698-a51e-2c83346e7422 is in state SUCCESS
2026-01-10 14:42:59.572801 | orchestrator | 2026-01-10 14:42:59 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:42:59.574521 | orchestrator |
2026-01-10 14:42:59.574574 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-10 14:42:59.574587 | orchestrator |
2026-01-10 14:42:59.574616 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-10 14:42:59.574655 | orchestrator | Saturday 10 January 2026 14:40:22 +0000 (0:00:00.263) 0:00:00.263 ******
2026-01-10 14:42:59.574667 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:42:59.574679 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:42:59.574706 | orchestrator | ok: [testbed-node-2]
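The `Task <uuid> is in state STARTED` / `Wait 1 second(s) until the next check` lines above show the deploy wrapper polling three background task IDs once per second until each reaches a terminal state such as SUCCESS. A minimal sketch of that polling pattern; `check_state` is a hypothetical stand-in for whatever task-status client the real tooling uses:

```python
import time


def wait_for_tasks(task_ids, check_state, interval=1.0, timeout=3600):
    """Poll until every task reports a terminal state (SUCCESS or FAILURE).

    check_state is a caller-supplied callable mapping a task id to its
    current state string; it stands in for the real status client.
    """
    pending = set(task_ids)
    deadline = time.monotonic() + timeout
    results = {}
    while pending:
        if time.monotonic() > deadline:
            raise TimeoutError(f"tasks still pending: {sorted(pending)}")
        # sorted() snapshots the set, so discarding while iterating is safe
        for task_id in sorted(pending):
            state = check_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                results[task_id] = state
                pending.discard(task_id)
        if pending:
            print(f"Wait {interval:g} second(s) until the next check")
            time.sleep(interval)
    return results
```

In the log the three tasks finish at different times, which is why one flips to SUCCESS while the other two are still STARTED on the same poll cycle.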
2026-01-10 14:42:59.574729 | orchestrator | 2026-01-10 14:42:59.574741 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-10 14:42:59.574751 | orchestrator | Saturday 10 January 2026 14:40:22 +0000 (0:00:00.298) 0:00:00.562 ****** 2026-01-10 14:42:59.574762 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2026-01-10 14:42:59.574774 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2026-01-10 14:42:59.574784 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2026-01-10 14:42:59.574795 | orchestrator | 2026-01-10 14:42:59.574806 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2026-01-10 14:42:59.574817 | orchestrator | 2026-01-10 14:42:59.574828 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-01-10 14:42:59.574838 | orchestrator | Saturday 10 January 2026 14:40:23 +0000 (0:00:00.447) 0:00:01.009 ****** 2026-01-10 14:42:59.574856 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:42:59.574874 | orchestrator | 2026-01-10 14:42:59.574900 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2026-01-10 14:42:59.574923 | orchestrator | Saturday 10 January 2026 14:40:23 +0000 (0:00:00.529) 0:00:01.539 ****** 2026-01-10 14:42:59.574940 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-01-10 14:42:59.574957 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-01-10 14:42:59.574974 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-01-10 14:42:59.574991 | orchestrator | 2026-01-10 14:42:59.575007 | orchestrator | TASK [opensearch : Ensuring config directories exist] 
************************** 2026-01-10 14:42:59.575023 | orchestrator | Saturday 10 January 2026 14:40:24 +0000 (0:00:00.760) 0:00:02.300 ****** 2026-01-10 14:42:59.575048 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-10 14:42:59.575074 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-10 14:42:59.575117 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-10 14:42:59.575168 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-10 14:42:59.575186 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-10 14:42:59.575202 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-10 14:42:59.575223 | orchestrator | 2026-01-10 14:42:59.575235 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-01-10 14:42:59.575248 | orchestrator | Saturday 10 January 2026 14:40:26 +0000 (0:00:01.948) 0:00:04.248 ****** 2026-01-10 14:42:59.575260 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:42:59.575272 | orchestrator | 2026-01-10 14:42:59.575285 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2026-01-10 14:42:59.575297 | orchestrator | Saturday 10 January 2026 14:40:27 +0000 (0:00:00.526) 0:00:04.774 ****** 2026-01-10 14:42:59.575325 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-10 14:42:59.575338 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 
'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-10 14:42:59.575350 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-10 14:42:59.575362 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-10 14:42:59.575394 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-10 14:42:59.575408 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': 
{'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-10 14:42:59.575457 | orchestrator | 2026-01-10 14:42:59.575470 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2026-01-10 14:42:59.575481 | orchestrator | Saturday 10 January 2026 14:40:29 +0000 (0:00:02.779) 0:00:07.554 ****** 2026-01-10 14:42:59.575492 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 
'frontend_http_extra': ['option dontlog-normal']}}}})  2026-01-10 14:42:59.575504 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-01-10 14:42:59.575538 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 
'frontend_http_extra': ['option dontlog-normal']}}}})  2026-01-10 14:42:59.575551 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-01-10 14:42:59.575563 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:42:59.575575 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:42:59.575586 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-01-10 14:42:59.575598 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-01-10 14:42:59.575616 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:42:59.575628 | orchestrator | 2026-01-10 14:42:59.575638 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2026-01-10 14:42:59.575649 | orchestrator | Saturday 10 January 2026 14:40:31 +0000 (0:00:01.410) 0:00:08.964 ****** 2026-01-10 14:42:59.575672 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g 
-Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-01-10 14:42:59.575686 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-01-10 14:42:59.575698 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:42:59.575709 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-01-10 14:42:59.575722 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-01-10 14:42:59.575741 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:42:59.575758 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 
'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-01-10 14:42:59.575776 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-01-10 14:42:59.575788 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:42:59.575799 | orchestrator | 2026-01-10 
14:42:59.575810 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2026-01-10 14:42:59.575821 | orchestrator | Saturday 10 January 2026 14:40:32 +0000 (0:00:01.124) 0:00:10.089 ****** 2026-01-10 14:42:59.575833 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-10 14:42:59.575857 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-10 
14:42:59.575869 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-10 14:42:59.575893 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-10 
14:42:59.575906 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-10 14:42:59.575919 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 
'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-10 14:42:59.575939 | orchestrator | 2026-01-10 14:42:59.575950 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2026-01-10 14:42:59.575961 | orchestrator | Saturday 10 January 2026 14:40:35 +0000 (0:00:03.050) 0:00:13.139 ****** 2026-01-10 14:42:59.575972 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:42:59.575983 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:42:59.575994 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:42:59.576005 | orchestrator | 2026-01-10 14:42:59.576016 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2026-01-10 14:42:59.576026 | orchestrator | Saturday 10 January 2026 14:40:39 +0000 (0:00:03.726) 0:00:16.865 ****** 2026-01-10 14:42:59.576037 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:42:59.576048 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:42:59.576058 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:42:59.576069 | orchestrator | 2026-01-10 14:42:59.576080 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2026-01-10 14:42:59.576091 | orchestrator | Saturday 10 January 2026 14:40:41 +0000 (0:00:01.984) 0:00:18.850 ****** 2026-01-10 14:42:59.576117 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-10 14:42:59.576130 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-10 14:42:59.576142 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': 
'30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-10 14:42:59.576166 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-10 14:42:59.576191 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': 
{'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-10 14:42:59.576204 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-10 14:42:59.576217 | orchestrator | 2026-01-10 14:42:59.576228 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-01-10 14:42:59.576248 | orchestrator | Saturday 10 January 2026 14:40:43 +0000 (0:00:02.131) 0:00:20.982 ****** 2026-01-10 14:42:59.576259 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:42:59.576271 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:42:59.576288 | orchestrator | skipping: [testbed-node-2] 2026-01-10 
14:42:59.576305 | orchestrator | 2026-01-10 14:42:59.576321 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-01-10 14:42:59.576340 | orchestrator | Saturday 10 January 2026 14:40:43 +0000 (0:00:00.296) 0:00:21.278 ****** 2026-01-10 14:42:59.576358 | orchestrator | 2026-01-10 14:42:59.576376 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-01-10 14:42:59.576393 | orchestrator | Saturday 10 January 2026 14:40:43 +0000 (0:00:00.067) 0:00:21.346 ****** 2026-01-10 14:42:59.576411 | orchestrator | 2026-01-10 14:42:59.576455 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-01-10 14:42:59.576475 | orchestrator | Saturday 10 January 2026 14:40:43 +0000 (0:00:00.067) 0:00:21.414 ****** 2026-01-10 14:42:59.576494 | orchestrator | 2026-01-10 14:42:59.576512 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2026-01-10 14:42:59.576531 | orchestrator | Saturday 10 January 2026 14:40:43 +0000 (0:00:00.065) 0:00:21.479 ****** 2026-01-10 14:42:59.576543 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:42:59.576554 | orchestrator | 2026-01-10 14:42:59.576565 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2026-01-10 14:42:59.576575 | orchestrator | Saturday 10 January 2026 14:40:44 +0000 (0:00:00.663) 0:00:22.143 ****** 2026-01-10 14:42:59.576586 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:42:59.576596 | orchestrator | 2026-01-10 14:42:59.576607 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2026-01-10 14:42:59.576617 | orchestrator | Saturday 10 January 2026 14:40:44 +0000 (0:00:00.221) 0:00:22.364 ****** 2026-01-10 14:42:59.576628 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:42:59.576639 | orchestrator | changed: [testbed-node-1] 2026-01-10 
14:42:59.576649 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:42:59.576660 | orchestrator | 2026-01-10 14:42:59.576671 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2026-01-10 14:42:59.576681 | orchestrator | Saturday 10 January 2026 14:41:37 +0000 (0:00:53.106) 0:01:15.471 ****** 2026-01-10 14:42:59.576692 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:42:59.576702 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:42:59.576719 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:42:59.576737 | orchestrator | 2026-01-10 14:42:59.576753 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-01-10 14:42:59.576773 | orchestrator | Saturday 10 January 2026 14:42:46 +0000 (0:01:08.816) 0:02:24.287 ****** 2026-01-10 14:42:59.576792 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:42:59.576810 | orchestrator | 2026-01-10 14:42:59.576828 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2026-01-10 14:42:59.576847 | orchestrator | Saturday 10 January 2026 14:42:47 +0000 (0:00:00.683) 0:02:24.970 ****** 2026-01-10 14:42:59.576866 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:42:59.576885 | orchestrator | 2026-01-10 14:42:59.576903 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2026-01-10 14:42:59.576921 | orchestrator | Saturday 10 January 2026 14:42:49 +0000 (0:00:02.626) 0:02:27.597 ****** 2026-01-10 14:42:59.576939 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:42:59.576959 | orchestrator | 2026-01-10 14:42:59.576978 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2026-01-10 14:42:59.576997 | orchestrator | Saturday 10 January 2026 14:42:52 +0000 (0:00:02.682) 0:02:30.280 ****** 2026-01-10 
14:42:59.577015 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:42:59.577034 | orchestrator | 2026-01-10 14:42:59.577051 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2026-01-10 14:42:59.577083 | orchestrator | Saturday 10 January 2026 14:42:55 +0000 (0:00:03.201) 0:02:33.482 ****** 2026-01-10 14:42:59.577102 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:42:59.577116 | orchestrator | 2026-01-10 14:42:59.577145 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-10 14:42:59.577172 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-01-10 14:42:59.577193 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-01-10 14:42:59.577211 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-01-10 14:42:59.577229 | orchestrator | 2026-01-10 14:42:59.577247 | orchestrator | 2026-01-10 14:42:59.577265 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-10 14:42:59.577283 | orchestrator | Saturday 10 January 2026 14:42:58 +0000 (0:00:02.839) 0:02:36.321 ****** 2026-01-10 14:42:59.577300 | orchestrator | =============================================================================== 2026-01-10 14:42:59.577318 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 68.82s 2026-01-10 14:42:59.577336 | orchestrator | opensearch : Restart opensearch container ------------------------------ 53.11s 2026-01-10 14:42:59.577355 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 3.73s 2026-01-10 14:42:59.577372 | orchestrator | opensearch : Create new log retention policy ---------------------------- 3.20s 2026-01-10 14:42:59.577389 | orchestrator | opensearch : Copying 
over config.json files for services ---------------- 3.05s 2026-01-10 14:42:59.577406 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.84s 2026-01-10 14:42:59.577592 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.78s 2026-01-10 14:42:59.577616 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.68s 2026-01-10 14:42:59.577635 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.63s 2026-01-10 14:42:59.577653 | orchestrator | opensearch : Check opensearch containers -------------------------------- 2.13s 2026-01-10 14:42:59.577672 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.98s 2026-01-10 14:42:59.577691 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.95s 2026-01-10 14:42:59.577708 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.41s 2026-01-10 14:42:59.577726 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.12s 2026-01-10 14:42:59.577745 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.76s 2026-01-10 14:42:59.577764 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.68s 2026-01-10 14:42:59.577781 | orchestrator | opensearch : Disable shard allocation ----------------------------------- 0.66s 2026-01-10 14:42:59.577798 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.53s 2026-01-10 14:42:59.577818 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.53s 2026-01-10 14:42:59.577837 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.45s 2026-01-10 14:43:02.624657 | orchestrator | 2026-01-10 14:43:02 | INFO 
 | Task f8ee9db1-a5aa-4411-bee5-bf63fa5e6a70 is in state STARTED 2026-01-10 14:43:02.625490 | orchestrator | 2026-01-10 14:43:02 | INFO  | Task a743dc27-9cca-41d3-9c4c-3d0ae7b23704 is in state STARTED 2026-01-10 14:43:02.625651 | orchestrator | 2026-01-10 14:43:02 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:43:05.677798 | orchestrator | 2026-01-10 14:43:05 | INFO  | Task f8ee9db1-a5aa-4411-bee5-bf63fa5e6a70 is in state STARTED 2026-01-10 14:43:05.678978 | orchestrator | 2026-01-10 14:43:05 | INFO  | Task a743dc27-9cca-41d3-9c4c-3d0ae7b23704 is in state STARTED 2026-01-10 14:43:05.679032 | orchestrator | 2026-01-10 14:43:05 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:43:08.718708 | orchestrator | 2026-01-10 14:43:08 | INFO  | Task f8ee9db1-a5aa-4411-bee5-bf63fa5e6a70 is in state STARTED 2026-01-10 14:43:08.721437 | orchestrator | 2026-01-10 14:43:08 | INFO  | Task a743dc27-9cca-41d3-9c4c-3d0ae7b23704 is in state STARTED 2026-01-10 14:43:08.721569 | orchestrator | 2026-01-10 14:43:08 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:43:11.758872 | orchestrator | 2026-01-10 14:43:11 | INFO  | Task f8ee9db1-a5aa-4411-bee5-bf63fa5e6a70 is in state STARTED 2026-01-10 14:43:11.760095 | orchestrator | 2026-01-10 14:43:11 | INFO  | Task a743dc27-9cca-41d3-9c4c-3d0ae7b23704 is in state STARTED 2026-01-10 14:43:11.760143 | orchestrator | 2026-01-10 14:43:11 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:43:14.798232 | orchestrator | 2026-01-10 14:43:14 | INFO  | Task f8ee9db1-a5aa-4411-bee5-bf63fa5e6a70 is in state STARTED 2026-01-10 14:43:14.798521 | orchestrator | 2026-01-10 14:43:14 | INFO  | Task a743dc27-9cca-41d3-9c4c-3d0ae7b23704 is in state STARTED 2026-01-10 14:43:14.798762 | orchestrator | 2026-01-10 14:43:14 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:43:17.841742 | orchestrator | 2026-01-10 14:43:17 | INFO  | Task f8ee9db1-a5aa-4411-bee5-bf63fa5e6a70 is in state STARTED 2026-01-10 
14:43:17.843631 | orchestrator | 2026-01-10 14:43:17 | INFO  | Task a743dc27-9cca-41d3-9c4c-3d0ae7b23704 is in state STARTED 2026-01-10 14:43:17.843865 | orchestrator | 2026-01-10 14:43:17 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:43:20.887173 | orchestrator | 2026-01-10 14:43:20 | INFO  | Task f8ee9db1-a5aa-4411-bee5-bf63fa5e6a70 is in state STARTED 2026-01-10 14:43:20.888699 | orchestrator | 2026-01-10 14:43:20 | INFO  | Task a743dc27-9cca-41d3-9c4c-3d0ae7b23704 is in state STARTED 2026-01-10 14:43:20.888770 | orchestrator | 2026-01-10 14:43:20 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:43:23.934874 | orchestrator | 2026-01-10 14:43:23 | INFO  | Task f8ee9db1-a5aa-4411-bee5-bf63fa5e6a70 is in state STARTED 2026-01-10 14:43:23.936544 | orchestrator | 2026-01-10 14:43:23 | INFO  | Task a743dc27-9cca-41d3-9c4c-3d0ae7b23704 is in state STARTED 2026-01-10 14:43:23.937196 | orchestrator | 2026-01-10 14:43:23 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:43:26.986132 | orchestrator | 2026-01-10 14:43:26.986232 | orchestrator | 2026-01-10 14:43:26.986248 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2026-01-10 14:43:26.986261 | orchestrator | 2026-01-10 14:43:26.986272 | orchestrator | TASK [Inform the user about the following task] ******************************** 2026-01-10 14:43:26.986284 | orchestrator | Saturday 10 January 2026 14:40:22 +0000 (0:00:00.090) 0:00:00.090 ****** 2026-01-10 14:43:26.986295 | orchestrator | ok: [localhost] => { 2026-01-10 14:43:26.986308 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 
2026-01-10 14:43:26.986320 | orchestrator | } 2026-01-10 14:43:26.986331 | orchestrator | 2026-01-10 14:43:26.986342 | orchestrator | TASK [Check MariaDB service] *************************************************** 2026-01-10 14:43:26.986354 | orchestrator | Saturday 10 January 2026 14:40:22 +0000 (0:00:00.075) 0:00:00.166 ****** 2026-01-10 14:43:26.986365 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2026-01-10 14:43:26.986378 | orchestrator | ...ignoring 2026-01-10 14:43:26.986420 | orchestrator | 2026-01-10 14:43:26.986441 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2026-01-10 14:43:26.986506 | orchestrator | Saturday 10 January 2026 14:40:25 +0000 (0:00:02.884) 0:00:03.051 ****** 2026-01-10 14:43:26.986527 | orchestrator | skipping: [localhost] 2026-01-10 14:43:26.986544 | orchestrator | 2026-01-10 14:43:26.986561 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2026-01-10 14:43:26.986579 | orchestrator | Saturday 10 January 2026 14:40:25 +0000 (0:00:00.070) 0:00:03.121 ****** 2026-01-10 14:43:26.986596 | orchestrator | ok: [localhost] 2026-01-10 14:43:26.986615 | orchestrator | 2026-01-10 14:43:26.986633 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-10 14:43:26.986650 | orchestrator | 2026-01-10 14:43:26.986665 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-10 14:43:26.986681 | orchestrator | Saturday 10 January 2026 14:40:25 +0000 (0:00:00.188) 0:00:03.309 ****** 2026-01-10 14:43:26.986697 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:43:26.986715 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:43:26.986738 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:43:26.986764 | orchestrator | 2026-01-10 14:43:26.986787 | 
orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-10 14:43:26.986804 | orchestrator | Saturday 10 January 2026 14:40:26 +0000 (0:00:00.373) 0:00:03.682 ****** 2026-01-10 14:43:26.986822 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-01-10 14:43:26.986840 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2026-01-10 14:43:26.986858 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-01-10 14:43:26.986876 | orchestrator | 2026-01-10 14:43:26.986893 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-01-10 14:43:26.986910 | orchestrator | 2026-01-10 14:43:26.986929 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-01-10 14:43:26.986947 | orchestrator | Saturday 10 January 2026 14:40:26 +0000 (0:00:00.641) 0:00:04.323 ****** 2026-01-10 14:43:26.986966 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-01-10 14:43:26.986985 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-01-10 14:43:26.987003 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-01-10 14:43:26.987022 | orchestrator | 2026-01-10 14:43:26.987039 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-01-10 14:43:26.987056 | orchestrator | Saturday 10 January 2026 14:40:27 +0000 (0:00:00.552) 0:00:04.876 ****** 2026-01-10 14:43:26.987075 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:43:26.987094 | orchestrator | 2026-01-10 14:43:26.987110 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2026-01-10 14:43:26.987121 | orchestrator | Saturday 10 January 2026 14:40:27 +0000 (0:00:00.572) 0:00:05.449 ****** 2026-01-10 14:43:26.987180 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-10 14:43:26.987215 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-10 14:43:26.987234 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-10 14:43:26.987254 | orchestrator | 2026-01-10 14:43:26.987273 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2026-01-10 14:43:26.987285 | orchestrator | Saturday 10 January 2026 14:40:31 +0000 (0:00:03.662) 0:00:09.111 ****** 2026-01-10 14:43:26.987296 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:43:26.987306 | orchestrator | 
changed: [testbed-node-0] 2026-01-10 14:43:26.987317 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:43:26.987328 | orchestrator | 2026-01-10 14:43:26.987339 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2026-01-10 14:43:26.987350 | orchestrator | Saturday 10 January 2026 14:40:32 +0000 (0:00:00.962) 0:00:10.073 ****** 2026-01-10 14:43:26.987360 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:43:26.987371 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:43:26.987382 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:43:26.987472 | orchestrator | 2026-01-10 14:43:26.987486 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2026-01-10 14:43:26.987497 | orchestrator | Saturday 10 January 2026 14:40:34 +0000 (0:00:01.627) 0:00:11.701 ****** 2026-01-10 14:43:26.987509 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 
2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-10 14:43:26.987538 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server 
testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-10 14:43:26.987562 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 
'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-10 14:43:26.987574 | orchestrator | 2026-01-10 14:43:26.987585 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2026-01-10 14:43:26.987596 | orchestrator | Saturday 10 January 2026 14:40:38 +0000 (0:00:04.433) 0:00:16.134 ****** 2026-01-10 14:43:26.987607 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:43:26.987701 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:43:26.987713 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:43:26.987724 | orchestrator | 2026-01-10 14:43:26.987735 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2026-01-10 14:43:26.987745 | orchestrator | Saturday 10 January 2026 14:40:39 +0000 (0:00:01.122) 0:00:17.257 ****** 2026-01-10 14:43:26.987756 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:43:26.987766 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:43:26.987777 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:43:26.987788 | orchestrator | 2026-01-10 14:43:26.987798 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-01-10 14:43:26.987809 | orchestrator | Saturday 10 January 2026 14:40:43 +0000 (0:00:04.302) 0:00:21.559 ****** 2026-01-10 14:43:26.987821 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:43:26.987840 | orchestrator | 2026-01-10 14:43:26.987851 | 
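[Editor's note] The custom_member_list entries repeated in the task items above encode an active/backup pattern: testbed-node-0 is the only active backend, while nodes 1 and 2 carry the HAProxy "backup" keyword, so traffic only fails over to them when the first node's clustercheck probe fails. Routing all writes to a single Galera node this way avoids multi-writer conflicts. A sketch of how such member lines could be generated (the helper is mine; the check/inter/rise/fall values are taken from the log):

```python
def haproxy_members(nodes: list[tuple[str, str]], port: int = 3306) -> list[str]:
    """Render HAProxy `server` lines for a Galera cluster: the first node is
    the active backend, every later node is marked `backup`.
    `nodes` is a list of (name, ip) pairs in priority order."""
    lines = []
    for i, (name, ip) in enumerate(nodes):
        line = (f"server {name} {ip}:{port} "
                f"check port {port} inter 2000 rise 2 fall 5")
        if i > 0:
            line += " backup"  # only used when the active node fails its health check
        lines.append(line)
    return lines
```

Called with the three testbed nodes and their 192.168.16.x addresses, this yields the same three member lines that appear in every item above.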
orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-01-10 14:43:26.987861 | orchestrator | Saturday 10 January 2026 14:40:44 +0000 (0:00:00.549) 0:00:22.108 ****** 2026-01-10 14:43:26.987890 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server 
testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-10 14:43:26.987904 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:43:26.987916 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 
2000 rise 2 fall 5 backup', '']}}}})  2026-01-10 14:43:26.987933 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:43:26.987956 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-10 
14:43:26.987968 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:43:26.987977 | orchestrator | 2026-01-10 14:43:26.987987 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-01-10 14:43:26.987996 | orchestrator | Saturday 10 January 2026 14:40:48 +0000 (0:00:03.636) 0:00:25.745 ****** 2026-01-10 14:43:26.988007 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 
inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-10 14:43:26.988023 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:43:26.988044 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 
testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-10 14:43:26.988056 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:43:26.988066 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 
5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-10 14:43:26.988077 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:43:26.988087 | orchestrator | 2026-01-10 14:43:26.988097 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-01-10 14:43:26.988113 | orchestrator | Saturday 10 January 2026 14:40:51 +0000 (0:00:03.052) 0:00:28.798 ****** 2026-01-10 14:43:26.988139 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option 
srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-10 14:43:26.988150 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:43:26.988166 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': 
[' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-10 14:43:26.988187 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:43:26.988218 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 
3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-10 14:43:26.988246 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:43:26.988263 | orchestrator | 2026-01-10 14:43:26.988278 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2026-01-10 14:43:26.988293 | orchestrator | Saturday 10 January 2026 14:40:54 +0000 (0:00:03.277) 0:00:32.075 ****** 2026-01-10 14:43:26.988318 | orchestrator | 2026-01-10 14:43:26 | INFO  | Task f8ee9db1-a5aa-4411-bee5-bf63fa5e6a70 is in state SUCCESS 2026-01-10 14:43:26.988336 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 
check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-10 14:43:26.988370 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 
'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-10 14:43:26.988431 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 
'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-10 14:43:26.988452 | orchestrator | 2026-01-10 14:43:26.988469 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2026-01-10 14:43:26.988485 | orchestrator | Saturday 10 January 2026 14:40:57 +0000 (0:00:03.397) 0:00:35.473 ****** 2026-01-10 14:43:26.988501 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:43:26.988520 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:43:26.988530 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:43:26.988539 | orchestrator | 2026-01-10 14:43:26.988549 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2026-01-10 14:43:26.988559 | orchestrator | Saturday 10 January 2026 14:40:58 +0000 (0:00:00.819) 0:00:36.292 ****** 2026-01-10 14:43:26.988568 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:43:26.988578 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:43:26.988588 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:43:26.988597 | orchestrator | 2026-01-10 14:43:26.988607 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2026-01-10 14:43:26.988616 | orchestrator | Saturday 10 January 2026 14:40:59 +0000 (0:00:00.599) 0:00:36.892 ****** 2026-01-10 14:43:26.988625 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:43:26.988635 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:43:26.988644 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:43:26.988654 | orchestrator | 2026-01-10 14:43:26.988663 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2026-01-10 14:43:26.988673 | 
orchestrator | Saturday 10 January 2026 14:40:59 +0000 (0:00:00.347) 0:00:37.239 ****** 2026-01-10 14:43:26.988684 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2026-01-10 14:43:26.988694 | orchestrator | ...ignoring 2026-01-10 14:43:26.988704 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2026-01-10 14:43:26.988714 | orchestrator | ...ignoring 2026-01-10 14:43:26.988724 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2026-01-10 14:43:26.988733 | orchestrator | ...ignoring 2026-01-10 14:43:26.988743 | orchestrator | 2026-01-10 14:43:26.988752 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2026-01-10 14:43:26.988762 | orchestrator | Saturday 10 January 2026 14:41:10 +0000 (0:00:10.890) 0:00:48.130 ****** 2026-01-10 14:43:26.988771 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:43:26.988781 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:43:26.988795 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:43:26.988805 | orchestrator | 2026-01-10 14:43:26.988815 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2026-01-10 14:43:26.988824 | orchestrator | Saturday 10 January 2026 14:41:10 +0000 (0:00:00.421) 0:00:48.551 ****** 2026-01-10 14:43:26.988834 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:43:26.988843 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:43:26.988853 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:43:26.988862 | orchestrator | 2026-01-10 14:43:26.988872 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2026-01-10 
14:43:26.988882 | orchestrator | Saturday 10 January 2026 14:41:11 +0000 (0:00:00.688) 0:00:49.240 ****** 2026-01-10 14:43:26.988891 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:43:26.988900 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:43:26.988910 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:43:26.988919 | orchestrator | 2026-01-10 14:43:26.988929 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2026-01-10 14:43:26.988938 | orchestrator | Saturday 10 January 2026 14:41:12 +0000 (0:00:00.467) 0:00:49.708 ****** 2026-01-10 14:43:26.988948 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:43:26.988958 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:43:26.988975 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:43:26.988985 | orchestrator | 2026-01-10 14:43:26.988995 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2026-01-10 14:43:26.989005 | orchestrator | Saturday 10 January 2026 14:41:12 +0000 (0:00:00.447) 0:00:50.155 ****** 2026-01-10 14:43:26.989014 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:43:26.989024 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:43:26.989040 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:43:26.989049 | orchestrator | 2026-01-10 14:43:26.989059 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2026-01-10 14:43:26.989068 | orchestrator | Saturday 10 January 2026 14:41:13 +0000 (0:00:00.547) 0:00:50.702 ****** 2026-01-10 14:43:26.989078 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:43:26.989088 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:43:26.989097 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:43:26.989107 | orchestrator | 2026-01-10 14:43:26.989116 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-01-10 
14:43:26.989126 | orchestrator | Saturday 10 January 2026 14:41:13 +0000 (0:00:00.718) 0:00:51.420 ****** 2026-01-10 14:43:26.989135 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:43:26.989145 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:43:26.989154 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2026-01-10 14:43:26.989169 | orchestrator | 2026-01-10 14:43:26.989185 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2026-01-10 14:43:26.989206 | orchestrator | Saturday 10 January 2026 14:41:14 +0000 (0:00:00.399) 0:00:51.819 ****** 2026-01-10 14:43:26.989228 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:43:26.989244 | orchestrator | 2026-01-10 14:43:26.989260 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2026-01-10 14:43:26.989277 | orchestrator | Saturday 10 January 2026 14:41:24 +0000 (0:00:10.121) 0:01:01.941 ****** 2026-01-10 14:43:26.989293 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:43:26.989310 | orchestrator | 2026-01-10 14:43:26.989326 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-01-10 14:43:26.989358 | orchestrator | Saturday 10 January 2026 14:41:24 +0000 (0:00:00.122) 0:01:02.064 ****** 2026-01-10 14:43:26.989379 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:43:26.989407 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:43:26.989418 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:43:26.989427 | orchestrator | 2026-01-10 14:43:26.989437 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2026-01-10 14:43:26.989447 | orchestrator | Saturday 10 January 2026 14:41:25 +0000 (0:00:00.971) 0:01:03.035 ****** 2026-01-10 14:43:26.989456 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:43:26.989466 | orchestrator | 2026-01-10 
14:43:26.989475 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2026-01-10 14:43:26.989485 | orchestrator | Saturday 10 January 2026 14:41:33 +0000 (0:00:07.907) 0:01:10.942 ****** 2026-01-10 14:43:26.989495 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:43:26.989504 | orchestrator | 2026-01-10 14:43:26.989514 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] ******* 2026-01-10 14:43:26.989523 | orchestrator | Saturday 10 January 2026 14:41:34 +0000 (0:00:01.602) 0:01:12.545 ****** 2026-01-10 14:43:26.989533 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:43:26.989543 | orchestrator | 2026-01-10 14:43:26.989552 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2026-01-10 14:43:26.989562 | orchestrator | Saturday 10 January 2026 14:41:37 +0000 (0:00:02.571) 0:01:15.116 ****** 2026-01-10 14:43:26.989572 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:43:26.989581 | orchestrator | 2026-01-10 14:43:26.989591 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2026-01-10 14:43:26.989601 | orchestrator | Saturday 10 January 2026 14:41:37 +0000 (0:00:00.141) 0:01:15.258 ****** 2026-01-10 14:43:26.989610 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:43:26.989620 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:43:26.989629 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:43:26.989639 | orchestrator | 2026-01-10 14:43:26.989649 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2026-01-10 14:43:26.989658 | orchestrator | Saturday 10 January 2026 14:41:38 +0000 (0:00:00.317) 0:01:15.575 ****** 2026-01-10 14:43:26.989677 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:43:26.989687 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2026-01-10 
14:43:26.989696 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:43:26.989706 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:43:26.989715 | orchestrator | 2026-01-10 14:43:26.989725 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-01-10 14:43:26.989735 | orchestrator | skipping: no hosts matched 2026-01-10 14:43:26.989744 | orchestrator | 2026-01-10 14:43:26.989754 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-01-10 14:43:26.989771 | orchestrator | 2026-01-10 14:43:26.989795 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-01-10 14:43:26.989812 | orchestrator | Saturday 10 January 2026 14:41:38 +0000 (0:00:00.735) 0:01:16.311 ****** 2026-01-10 14:43:26.989830 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:43:26.989847 | orchestrator | 2026-01-10 14:43:26.989864 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-01-10 14:43:26.989881 | orchestrator | Saturday 10 January 2026 14:41:56 +0000 (0:00:17.997) 0:01:34.309 ****** 2026-01-10 14:43:26.989898 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:43:26.989915 | orchestrator | 2026-01-10 14:43:26.989934 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-01-10 14:43:26.989944 | orchestrator | Saturday 10 January 2026 14:42:12 +0000 (0:00:15.691) 0:01:50.000 ****** 2026-01-10 14:43:26.989954 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:43:26.989964 | orchestrator | 2026-01-10 14:43:26.989973 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-01-10 14:43:26.989982 | orchestrator | 2026-01-10 14:43:26.989992 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-01-10 14:43:26.990002 | orchestrator | Saturday 10 January 
2026 14:42:14 +0000 (0:00:02.470) 0:01:52.470 ****** 2026-01-10 14:43:26.990051 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:43:26.990064 | orchestrator | 2026-01-10 14:43:26.990083 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-01-10 14:43:26.990093 | orchestrator | Saturday 10 January 2026 14:42:33 +0000 (0:00:18.324) 0:02:10.795 ****** 2026-01-10 14:43:26.990103 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:43:26.990112 | orchestrator | 2026-01-10 14:43:26.990121 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-01-10 14:43:26.990131 | orchestrator | Saturday 10 January 2026 14:42:48 +0000 (0:00:15.541) 0:02:26.336 ****** 2026-01-10 14:43:26.990140 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:43:26.990150 | orchestrator | 2026-01-10 14:43:26.990159 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-01-10 14:43:26.990169 | orchestrator | 2026-01-10 14:43:26.990178 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-01-10 14:43:26.990188 | orchestrator | Saturday 10 January 2026 14:42:51 +0000 (0:00:02.728) 0:02:29.065 ****** 2026-01-10 14:43:26.990197 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:43:26.990207 | orchestrator | 2026-01-10 14:43:26.990216 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-01-10 14:43:26.990226 | orchestrator | Saturday 10 January 2026 14:43:03 +0000 (0:00:12.289) 0:02:41.355 ****** 2026-01-10 14:43:26.990235 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:43:26.990245 | orchestrator | 2026-01-10 14:43:26.990254 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-01-10 14:43:26.990264 | orchestrator | Saturday 10 January 2026 14:43:09 +0000 (0:00:05.651) 0:02:47.006 ****** 2026-01-10 
14:43:26.990273 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:43:26.990283 | orchestrator | 2026-01-10 14:43:26.990292 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-01-10 14:43:26.990302 | orchestrator | 2026-01-10 14:43:26.990311 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-01-10 14:43:26.990321 | orchestrator | Saturday 10 January 2026 14:43:12 +0000 (0:00:02.872) 0:02:49.879 ****** 2026-01-10 14:43:26.990339 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:43:26.990349 | orchestrator | 2026-01-10 14:43:26.990358 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2026-01-10 14:43:26.990368 | orchestrator | Saturday 10 January 2026 14:43:12 +0000 (0:00:00.556) 0:02:50.436 ****** 2026-01-10 14:43:26.990377 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:43:26.990387 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:43:26.990416 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:43:26.990426 | orchestrator | 2026-01-10 14:43:26.990436 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2026-01-10 14:43:26.990445 | orchestrator | Saturday 10 January 2026 14:43:15 +0000 (0:00:02.209) 0:02:52.646 ****** 2026-01-10 14:43:26.990455 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:43:26.990464 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:43:26.990474 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:43:26.990484 | orchestrator | 2026-01-10 14:43:26.990493 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2026-01-10 14:43:26.990503 | orchestrator | Saturday 10 January 2026 14:43:17 +0000 (0:00:02.318) 0:02:54.964 ****** 2026-01-10 14:43:26.990512 | orchestrator | skipping: [testbed-node-1] 2026-01-10 
14:43:26.990522 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:43:26.990532 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:43:26.990541 | orchestrator | 2026-01-10 14:43:26.990551 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2026-01-10 14:43:26.990560 | orchestrator | Saturday 10 January 2026 14:43:19 +0000 (0:00:02.602) 0:02:57.567 ****** 2026-01-10 14:43:26.990570 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:43:26.990579 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:43:26.990589 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:43:26.990598 | orchestrator | 2026-01-10 14:43:26.990608 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-01-10 14:43:26.990617 | orchestrator | Saturday 10 January 2026 14:43:22 +0000 (0:00:02.583) 0:03:00.150 ****** 2026-01-10 14:43:26.990627 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:43:26.990637 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:43:26.990646 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:43:26.990656 | orchestrator | 2026-01-10 14:43:26.990665 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-01-10 14:43:26.990675 | orchestrator | Saturday 10 January 2026 14:43:25 +0000 (0:00:03.187) 0:03:03.338 ****** 2026-01-10 14:43:26.990684 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:43:26.990694 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:43:26.990704 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:43:26.990713 | orchestrator | 2026-01-10 14:43:26.990723 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-10 14:43:26.990733 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-01-10 14:43:26.990749 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 
failed=0 skipped=11  rescued=0 ignored=1  2026-01-10 14:43:26.990761 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2026-01-10 14:43:26.990771 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2026-01-10 14:43:26.990781 | orchestrator | 2026-01-10 14:43:26.990790 | orchestrator | 2026-01-10 14:43:26.990861 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-10 14:43:26.990873 | orchestrator | Saturday 10 January 2026 14:43:26 +0000 (0:00:00.230) 0:03:03.569 ****** 2026-01-10 14:43:26.990883 | orchestrator | =============================================================================== 2026-01-10 14:43:26.990900 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 36.32s 2026-01-10 14:43:26.990918 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 31.23s 2026-01-10 14:43:26.990928 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 12.29s 2026-01-10 14:43:26.990938 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.89s 2026-01-10 14:43:26.990947 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.12s 2026-01-10 14:43:26.990957 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 7.91s 2026-01-10 14:43:26.990966 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 5.65s 2026-01-10 14:43:26.990976 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 5.20s 2026-01-10 14:43:26.990985 | orchestrator | mariadb : Copying over config.json files for services ------------------- 4.43s 2026-01-10 14:43:26.990995 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 
4.30s 2026-01-10 14:43:26.991004 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.66s 2026-01-10 14:43:26.991014 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 3.64s 2026-01-10 14:43:26.991024 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 3.40s 2026-01-10 14:43:26.991033 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 3.28s 2026-01-10 14:43:26.991043 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 3.19s 2026-01-10 14:43:26.991052 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 3.05s 2026-01-10 14:43:26.991062 | orchestrator | Check MariaDB service --------------------------------------------------- 2.88s 2026-01-10 14:43:26.991071 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.87s 2026-01-10 14:43:26.991081 | orchestrator | mariadb : Creating database backup user and setting permissions --------- 2.60s 2026-01-10 14:43:26.991090 | orchestrator | mariadb : Granting permissions on Mariabackup database to backup user --- 2.58s 2026-01-10 14:43:26.991100 | orchestrator | 2026-01-10 14:43:26 | INFO  | Task a743dc27-9cca-41d3-9c4c-3d0ae7b23704 is in state STARTED 2026-01-10 14:43:26.991110 | orchestrator | 2026-01-10 14:43:26 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:43:30.041362 | orchestrator | 2026-01-10 14:43:30 | INFO  | Task c6504c42-a5f0-4a43-90e4-7aa777c16b28 is in state STARTED 2026-01-10 14:43:30.044616 | orchestrator | 2026-01-10 14:43:30 | INFO  | Task a743dc27-9cca-41d3-9c4c-3d0ae7b23704 is in state STARTED 2026-01-10 14:43:30.047611 | orchestrator | 2026-01-10 14:43:30 | INFO  | Task 5121d459-9d9c-4578-ba7c-004bbaee22af is in state STARTED 2026-01-10 14:43:30.047749 | orchestrator | 2026-01-10 14:43:30 | INFO  | Wait 1 second(s) 
until the next check
check
2026-01-10 14:44:43.171742 | orchestrator | 2026-01-10 14:44:43 | INFO  | Task c6504c42-a5f0-4a43-90e4-7aa777c16b28 is in state STARTED
2026-01-10 14:44:43.175548 | orchestrator | 2026-01-10 14:44:43 | INFO  | Task a743dc27-9cca-41d3-9c4c-3d0ae7b23704 is in state SUCCESS
2026-01-10 14:44:43.176721 | orchestrator |
2026-01-10 14:44:43.176790 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-01-10 14:44:43.176799 | orchestrator | 2.16.14
2026-01-10 14:44:43.176848 | orchestrator |
2026-01-10 14:44:43.176854 | orchestrator | PLAY [Create ceph pools] *******************************************************
2026-01-10 14:44:43.176860 | orchestrator |
2026-01-10 14:44:43.176866 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-01-10 14:44:43.176872 | orchestrator | Saturday 10 January 2026 14:42:35 +0000 (0:00:00.633) 0:00:00.633 ******
2026-01-10 14:44:43.176878 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-10 14:44:43.176885 | orchestrator |
2026-01-10 14:44:43.176890 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-01-10 14:44:43.176896 | orchestrator | Saturday 10 January 2026 14:42:35 +0000 (0:00:00.638) 0:00:01.272 ******
2026-01-10 14:44:43.176901 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:44:43.176907 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:44:43.176913 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:44:43.176918 | orchestrator |
2026-01-10 14:44:43.176972 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-01-10 14:44:43.176979 | orchestrator | Saturday 10 January 2026 14:42:36 +0000 (0:00:00.672) 0:00:01.944 ******
2026-01-10 14:44:43.176984 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:44:43.176989 | orchestrator | ok: [testbed-node-4]
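The "Task … is in state STARTED / Wait 1 second(s) until the next check" lines above come from a poll-until-done loop: the orchestrator repeatedly fetches the state of each submitted task and sleeps between rounds until none is left in STARTED. A minimal sketch of that pattern follows; `get_states` and `wait_for_tasks` are hypothetical names, not the actual OSISM client API.

```python
import time
from typing import Callable, Dict


def wait_for_tasks(get_states: Callable[[], Dict[str, str]],
                   interval: float = 1.0,
                   timeout: float = 3600.0) -> Dict[str, str]:
    """Poll task states until no task is in STARTED, then return the final states.

    `get_states` is a hypothetical callable returning {task_id: state};
    the real client library may expose this differently.
    """
    deadline = time.monotonic() + timeout
    while True:
        states = get_states()
        for task_id, state in states.items():
            print(f"Task {task_id} is in state {state}")
        # All tasks have left STARTED (SUCCESS, FAILURE, ...): stop polling.
        if all(s != "STARTED" for s in states.values()):
            return states
        if time.monotonic() > deadline:
            raise TimeoutError("tasks still STARTED after timeout")
        print(f"Wait {interval:g} second(s) until the next check")
        time.sleep(interval)
```

Note that the log shows roughly three seconds between rounds even with a one-second wait message; the extra time is presumably spent fetching the three task states themselves.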
2026-01-10 14:44:43.176995 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:44:43.177000 | orchestrator | 2026-01-10 14:44:43.177006 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-01-10 14:44:43.177012 | orchestrator | Saturday 10 January 2026 14:42:36 +0000 (0:00:00.326) 0:00:02.271 ****** 2026-01-10 14:44:43.177018 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:44:43.177023 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:44:43.177029 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:44:43.177034 | orchestrator | 2026-01-10 14:44:43.177040 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-01-10 14:44:43.177045 | orchestrator | Saturday 10 January 2026 14:42:37 +0000 (0:00:00.890) 0:00:03.162 ****** 2026-01-10 14:44:43.177091 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:44:43.177595 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:44:43.177646 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:44:43.177654 | orchestrator | 2026-01-10 14:44:43.177660 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-01-10 14:44:43.177666 | orchestrator | Saturday 10 January 2026 14:42:38 +0000 (0:00:00.313) 0:00:03.475 ****** 2026-01-10 14:44:43.177672 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:44:43.177678 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:44:43.177684 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:44:43.177689 | orchestrator | 2026-01-10 14:44:43.177695 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-01-10 14:44:43.177730 | orchestrator | Saturday 10 January 2026 14:42:38 +0000 (0:00:00.318) 0:00:03.794 ****** 2026-01-10 14:44:43.177735 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:44:43.177741 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:44:43.177746 | orchestrator | ok: [testbed-node-5] 2026-01-10 
14:44:43.177752 | orchestrator | 2026-01-10 14:44:43.177758 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-01-10 14:44:43.177764 | orchestrator | Saturday 10 January 2026 14:42:38 +0000 (0:00:00.335) 0:00:04.129 ****** 2026-01-10 14:44:43.177770 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:44:43.177776 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:44:43.177781 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:44:43.177787 | orchestrator | 2026-01-10 14:44:43.177792 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-01-10 14:44:43.177811 | orchestrator | Saturday 10 January 2026 14:42:39 +0000 (0:00:00.526) 0:00:04.655 ****** 2026-01-10 14:44:43.177817 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:44:43.177823 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:44:43.177828 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:44:43.177842 | orchestrator | 2026-01-10 14:44:43.177849 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-01-10 14:44:43.177859 | orchestrator | Saturday 10 January 2026 14:42:39 +0000 (0:00:00.307) 0:00:04.963 ****** 2026-01-10 14:44:43.177865 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-10 14:44:43.177871 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-10 14:44:43.177876 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-10 14:44:43.177882 | orchestrator | 2026-01-10 14:44:43.177888 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-01-10 14:44:43.177894 | orchestrator | Saturday 10 January 2026 14:42:40 +0000 (0:00:00.673) 0:00:05.637 ****** 2026-01-10 14:44:43.177899 | orchestrator | ok: [testbed-node-3] 2026-01-10 
14:44:43.177905 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:44:43.177911 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:44:43.177917 | orchestrator | 2026-01-10 14:44:43.177922 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-01-10 14:44:43.177928 | orchestrator | Saturday 10 January 2026 14:42:40 +0000 (0:00:00.436) 0:00:06.073 ****** 2026-01-10 14:44:43.177934 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-10 14:44:43.177939 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-10 14:44:43.177945 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-10 14:44:43.177951 | orchestrator | 2026-01-10 14:44:43.177957 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-01-10 14:44:43.177963 | orchestrator | Saturday 10 January 2026 14:42:43 +0000 (0:00:02.281) 0:00:08.354 ****** 2026-01-10 14:44:43.177969 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-01-10 14:44:43.177975 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-01-10 14:44:43.177981 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-01-10 14:44:43.177986 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:44:43.177992 | orchestrator | 2026-01-10 14:44:43.178048 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-01-10 14:44:43.178055 | orchestrator | Saturday 10 January 2026 14:42:43 +0000 (0:00:00.671) 0:00:09.026 ****** 2026-01-10 14:44:43.178062 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-01-10 
14:44:43.178070 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-01-10 14:44:43.178075 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-01-10 14:44:43.178081 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:44:43.178087 | orchestrator | 2026-01-10 14:44:43.178092 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-01-10 14:44:43.178098 | orchestrator | Saturday 10 January 2026 14:42:44 +0000 (0:00:00.818) 0:00:09.845 ****** 2026-01-10 14:44:43.178105 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-01-10 14:44:43.178116 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-01-10 14:44:43.178122 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-01-10 14:44:43.178128 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:44:43.178133 | orchestrator | 2026-01-10 14:44:43.178139 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-01-10 14:44:43.178145 | orchestrator | Saturday 10 January 2026 14:42:44 +0000 (0:00:00.348) 0:00:10.194 ****** 2026-01-10 14:44:43.178158 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'ec0e548c7d9b', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-01-10 14:42:41.475053', 'end': '2026-01-10 14:42:41.529123', 'delta': '0:00:00.054070', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['ec0e548c7d9b'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-01-10 14:44:43.178165 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'a4699e1e8617', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-01-10 14:42:42.305159', 'end': '2026-01-10 14:42:42.340101', 'delta': '0:00:00.034942', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': 
None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['a4699e1e8617'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-01-10 14:44:43.178187 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '83ac5bb6fee6', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-01-10 14:42:42.855441', 'end': '2026-01-10 14:42:42.893608', 'delta': '0:00:00.038167', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['83ac5bb6fee6'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-01-10 14:44:43.178194 | orchestrator | 2026-01-10 14:44:43.178199 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-01-10 14:44:43.178205 | orchestrator | Saturday 10 January 2026 14:42:45 +0000 (0:00:00.208) 0:00:10.403 ****** 2026-01-10 14:44:43.178214 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:44:43.178220 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:44:43.178226 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:44:43.178231 | orchestrator | 2026-01-10 14:44:43.178237 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-01-10 14:44:43.178242 | orchestrator | Saturday 10 January 2026 14:42:45 +0000 (0:00:00.481) 0:00:10.884 ****** 2026-01-10 14:44:43.178248 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2026-01-10 14:44:43.178254 | orchestrator | 2026-01-10 14:44:43.178259 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 
1] ********************************* 2026-01-10 14:44:43.178264 | orchestrator | Saturday 10 January 2026 14:42:47 +0000 (0:00:01.803) 0:00:12.687 ****** 2026-01-10 14:44:43.178269 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:44:43.178274 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:44:43.178279 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:44:43.178284 | orchestrator | 2026-01-10 14:44:43.178290 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-01-10 14:44:43.178295 | orchestrator | Saturday 10 January 2026 14:42:47 +0000 (0:00:00.333) 0:00:13.021 ****** 2026-01-10 14:44:43.178301 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:44:43.178306 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:44:43.178312 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:44:43.178334 | orchestrator | 2026-01-10 14:44:43.178339 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-01-10 14:44:43.178345 | orchestrator | Saturday 10 January 2026 14:42:48 +0000 (0:00:00.388) 0:00:13.410 ****** 2026-01-10 14:44:43.178350 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:44:43.178356 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:44:43.178361 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:44:43.178367 | orchestrator | 2026-01-10 14:44:43.178373 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-01-10 14:44:43.178379 | orchestrator | Saturday 10 January 2026 14:42:48 +0000 (0:00:00.469) 0:00:13.880 ****** 2026-01-10 14:44:43.178385 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:44:43.178391 | orchestrator | 2026-01-10 14:44:43.178397 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-01-10 14:44:43.178403 | orchestrator | Saturday 10 January 2026 14:42:48 +0000 (0:00:00.148) 0:00:14.029 
****** 2026-01-10 14:44:43.178409 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:44:43.178415 | orchestrator | 2026-01-10 14:44:43.178421 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-01-10 14:44:43.178427 | orchestrator | Saturday 10 January 2026 14:42:48 +0000 (0:00:00.233) 0:00:14.262 ****** 2026-01-10 14:44:43.178433 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:44:43.178441 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:44:43.178447 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:44:43.178453 | orchestrator | 2026-01-10 14:44:43.178458 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-01-10 14:44:43.178464 | orchestrator | Saturday 10 January 2026 14:42:49 +0000 (0:00:00.304) 0:00:14.567 ****** 2026-01-10 14:44:43.178470 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:44:43.178476 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:44:43.178482 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:44:43.178488 | orchestrator | 2026-01-10 14:44:43.178494 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-01-10 14:44:43.178500 | orchestrator | Saturday 10 January 2026 14:42:49 +0000 (0:00:00.323) 0:00:14.891 ****** 2026-01-10 14:44:43.178506 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:44:43.178513 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:44:43.178519 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:44:43.178525 | orchestrator | 2026-01-10 14:44:43.178532 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-01-10 14:44:43.178538 | orchestrator | Saturday 10 January 2026 14:42:50 +0000 (0:00:00.556) 0:00:15.447 ****** 2026-01-10 14:44:43.178547 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:44:43.178553 | orchestrator | skipping: 
[testbed-node-4] 2026-01-10 14:44:43.178559 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:44:43.178565 | orchestrator | 2026-01-10 14:44:43.178571 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-01-10 14:44:43.178578 | orchestrator | Saturday 10 January 2026 14:42:50 +0000 (0:00:00.363) 0:00:15.811 ****** 2026-01-10 14:44:43.178584 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:44:43.178590 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:44:43.178596 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:44:43.178602 | orchestrator | 2026-01-10 14:44:43.178608 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-01-10 14:44:43.178614 | orchestrator | Saturday 10 January 2026 14:42:50 +0000 (0:00:00.329) 0:00:16.141 ****** 2026-01-10 14:44:43.178620 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:44:43.178627 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:44:43.178633 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:44:43.178656 | orchestrator | 2026-01-10 14:44:43.178663 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-01-10 14:44:43.178670 | orchestrator | Saturday 10 January 2026 14:42:51 +0000 (0:00:00.320) 0:00:16.462 ****** 2026-01-10 14:44:43.178676 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:44:43.178682 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:44:43.178688 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:44:43.178694 | orchestrator | 2026-01-10 14:44:43.178700 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-01-10 14:44:43.178706 | orchestrator | Saturday 10 January 2026 14:42:51 +0000 (0:00:00.527) 0:00:16.989 ****** 2026-01-10 14:44:43.178712 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 
'host': '', 'links': {'ids': ['dm-name-ceph--afcf3728--3a76--5607--aebb--61451d8643bd-osd--block--afcf3728--3a76--5607--aebb--61451d8643bd', 'dm-uuid-LVM-3EtkfyBxqllZGPVj4jX11hTg3QJalLi9ufUqhVZyz9vaMSCvbVz9QMTCeSCNfKHd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-10 14:44:43.178719 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--7d69473f--eeb6--5b22--bf27--181ed9eac77f-osd--block--7d69473f--eeb6--5b22--bf27--181ed9eac77f', 'dm-uuid-LVM-rJQmIINvGCLvyiYBR7BPCsE0l9Ac7YGetpM2LEc5JOr63yjDWOuKcaTFCCdwRmte'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-10 14:44:43.178726 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:44:43.178732 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:44:43.178741 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:44:43.178751 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:44:43.178758 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:44:43.178779 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:44:43.178787 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:44:43.178793 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:44:43.178803 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_de778ce0-4f6d-44be-b211-86e0cabeb927', 'scsi-SQEMU_QEMU_HARDDISK_de778ce0-4f6d-44be-b211-86e0cabeb927'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_de778ce0-4f6d-44be-b211-86e0cabeb927-part1', 'scsi-SQEMU_QEMU_HARDDISK_de778ce0-4f6d-44be-b211-86e0cabeb927-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_de778ce0-4f6d-44be-b211-86e0cabeb927-part14', 'scsi-SQEMU_QEMU_HARDDISK_de778ce0-4f6d-44be-b211-86e0cabeb927-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_de778ce0-4f6d-44be-b211-86e0cabeb927-part15', 'scsi-SQEMU_QEMU_HARDDISK_de778ce0-4f6d-44be-b211-86e0cabeb927-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_de778ce0-4f6d-44be-b211-86e0cabeb927-part16', 'scsi-SQEMU_QEMU_HARDDISK_de778ce0-4f6d-44be-b211-86e0cabeb927-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-10 14:44:43.178814 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8bd1ebb6--f1fa--58d8--b8a2--53a51729cfca-osd--block--8bd1ebb6--f1fa--58d8--b8a2--53a51729cfca', 'dm-uuid-LVM-2Fwxc5Ai0VKcdRkNWbyH7mgikuiNPcDqz4zN2NxBNprPiRAzwpTBwbkt5aHRRx46'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-10 14:44:43.178834 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--afcf3728--3a76--5607--aebb--61451d8643bd-osd--block--afcf3728--3a76--5607--aebb--61451d8643bd'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-9qr1pZ-RkHo-c3FE-UdXw-MB1l-GSnT-LlveUO', 'scsi-0QEMU_QEMU_HARDDISK_15b7e91d-d5fd-4068-ac98-0857e3d5fdf2', 'scsi-SQEMU_QEMU_HARDDISK_15b7e91d-d5fd-4068-ac98-0857e3d5fdf2'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-10 14:44:43.178841 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d6926eeb--1396--512c--9972--e44f7d919ea4-osd--block--d6926eeb--1396--512c--9972--e44f7d919ea4', 'dm-uuid-LVM-TVbW5CMftcxSXy1c5xps3v2GvblaD84SboJE4C3svpS8uL1HxSuFZkgv8JDsZseN'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': 
'512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-10 14:44:43.178847 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--7d69473f--eeb6--5b22--bf27--181ed9eac77f-osd--block--7d69473f--eeb6--5b22--bf27--181ed9eac77f'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-huiRGz-aiBY-e8Ey-pv2n-4eFw-rHyE-ibrNbS', 'scsi-0QEMU_QEMU_HARDDISK_b1ccfdf8-8ebc-4581-b39b-71c057731eea', 'scsi-SQEMU_QEMU_HARDDISK_b1ccfdf8-8ebc-4581-b39b-71c057731eea'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-10 14:44:43.178853 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:44:43.178864 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c3c9ac61-c03e-421c-9f43-37b1f8399a84', 'scsi-SQEMU_QEMU_HARDDISK_c3c9ac61-c03e-421c-9f43-37b1f8399a84'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-10 14:44:43.178873 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-10-13-45-56-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-10 14:44:43.178879 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:44:43.178900 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-01-10 14:44:43.178907 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:44:43.178913 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:44:43.178919 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:44:43.178924 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:44:43.178930 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:44:43.178936 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:44:43.178950 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e7f1afe6-f3aa-449f-9835-56bec5ec9c51', 'scsi-SQEMU_QEMU_HARDDISK_e7f1afe6-f3aa-449f-9835-56bec5ec9c51'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e7f1afe6-f3aa-449f-9835-56bec5ec9c51-part1', 'scsi-SQEMU_QEMU_HARDDISK_e7f1afe6-f3aa-449f-9835-56bec5ec9c51-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e7f1afe6-f3aa-449f-9835-56bec5ec9c51-part14', 'scsi-SQEMU_QEMU_HARDDISK_e7f1afe6-f3aa-449f-9835-56bec5ec9c51-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e7f1afe6-f3aa-449f-9835-56bec5ec9c51-part15', 'scsi-SQEMU_QEMU_HARDDISK_e7f1afe6-f3aa-449f-9835-56bec5ec9c51-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e7f1afe6-f3aa-449f-9835-56bec5ec9c51-part16', 
'scsi-SQEMU_QEMU_HARDDISK_e7f1afe6-f3aa-449f-9835-56bec5ec9c51-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-10 14:44:43.178957 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--377cb61f--8fa6--58d2--888b--072b5e96ec0c-osd--block--377cb61f--8fa6--58d2--888b--072b5e96ec0c', 'dm-uuid-LVM-vZkjZSQHbS0q2GyNOh44hFZjUTSzvcamynRYm7ghd2xWRzADM7zfTvvhOTQ6ZkFq'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-10 14:44:43.178963 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--8bd1ebb6--f1fa--58d8--b8a2--53a51729cfca-osd--block--8bd1ebb6--f1fa--58d8--b8a2--53a51729cfca'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-s2tKbb-biWW-rhor-AR6o-qWz0-GpRW-LWXLkj', 'scsi-0QEMU_QEMU_HARDDISK_44339de3-07bb-4d03-9d3b-2e0777e51af2', 'scsi-SQEMU_QEMU_HARDDISK_44339de3-07bb-4d03-9d3b-2e0777e51af2'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-10 14:44:43.178969 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--82a5292d--e4f5--5675--b04e--23ddf5e1abb7-osd--block--82a5292d--e4f5--5675--b04e--23ddf5e1abb7', 'dm-uuid-LVM-DRgormTtPowM6Igp9Je7HfxfYSL52AtszM3oBsEeG5RiUP3wrwR8QJdi01POVrmD'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-10 14:44:43.178981 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--d6926eeb--1396--512c--9972--e44f7d919ea4-osd--block--d6926eeb--1396--512c--9972--e44f7d919ea4'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-JNgVMk-aGs6-lbeJ-KhY4-gEYt-H63q-Mfmq4b', 'scsi-0QEMU_QEMU_HARDDISK_09bcf5bb-a136-4209-bef3-37a648ec73be', 'scsi-SQEMU_QEMU_HARDDISK_09bcf5bb-a136-4209-bef3-37a648ec73be'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-10 14:44:43.178988 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:44:43.178997 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c56b24e3-125a-48ee-acc4-7420ce900c20', 'scsi-SQEMU_QEMU_HARDDISK_c56b24e3-125a-48ee-acc4-7420ce900c20'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-10 14:44:43.179003 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-10-13-45-38-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-10 14:44:43.179009 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:44:43.179015 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:44:43.179021 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:44:43.179027 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:44:43.179036 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:44:43.179044 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:44:43.179050 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:44:43.179056 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:44:43.179067 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1f1914ea-fd0f-4c7c-b7b9-c351b421a456', 'scsi-SQEMU_QEMU_HARDDISK_1f1914ea-fd0f-4c7c-b7b9-c351b421a456'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1f1914ea-fd0f-4c7c-b7b9-c351b421a456-part1', 'scsi-SQEMU_QEMU_HARDDISK_1f1914ea-fd0f-4c7c-b7b9-c351b421a456-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1f1914ea-fd0f-4c7c-b7b9-c351b421a456-part14', 'scsi-SQEMU_QEMU_HARDDISK_1f1914ea-fd0f-4c7c-b7b9-c351b421a456-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1f1914ea-fd0f-4c7c-b7b9-c351b421a456-part15', 'scsi-SQEMU_QEMU_HARDDISK_1f1914ea-fd0f-4c7c-b7b9-c351b421a456-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1f1914ea-fd0f-4c7c-b7b9-c351b421a456-part16', 
'scsi-SQEMU_QEMU_HARDDISK_1f1914ea-fd0f-4c7c-b7b9-c351b421a456-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-10 14:44:43.179076 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--377cb61f--8fa6--58d2--888b--072b5e96ec0c-osd--block--377cb61f--8fa6--58d2--888b--072b5e96ec0c'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-fDJB5p-jnnX-ZrSt-I40a-mAqp-Scoe-YsWpaI', 'scsi-0QEMU_QEMU_HARDDISK_f0ceade3-8439-47c2-ab29-dba6a2d0af37', 'scsi-SQEMU_QEMU_HARDDISK_f0ceade3-8439-47c2-ab29-dba6a2d0af37'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-10 14:44:43.179084 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--82a5292d--e4f5--5675--b04e--23ddf5e1abb7-osd--block--82a5292d--e4f5--5675--b04e--23ddf5e1abb7'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-qtVK5u-Tw8C-lwsP-BX4N-dhRe-17lE-qDl3gH', 'scsi-0QEMU_QEMU_HARDDISK_7172e707-12eb-4bf8-889d-ca95993faa89', 'scsi-SQEMU_QEMU_HARDDISK_7172e707-12eb-4bf8-889d-ca95993faa89'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-10 14:44:43.179090 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_573345f2-5167-4bb0-bd40-d392a39279fc', 'scsi-SQEMU_QEMU_HARDDISK_573345f2-5167-4bb0-bd40-d392a39279fc'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-10 14:44:43.179099 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-10-13-46-01-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-10 14:44:43.179105 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:44:43.179111 | orchestrator | 2026-01-10 14:44:43.179116 | orchestrator | TASK [ceph-facts : Set_fact devices 
generate device list when osd_auto_discovery] *** 2026-01-10 14:44:43.179122 | orchestrator | Saturday 10 January 2026 14:42:52 +0000 (0:00:00.576) 0:00:17.566 ****** 2026-01-10 14:44:43.179128 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--afcf3728--3a76--5607--aebb--61451d8643bd-osd--block--afcf3728--3a76--5607--aebb--61451d8643bd', 'dm-uuid-LVM-3EtkfyBxqllZGPVj4jX11hTg3QJalLi9ufUqhVZyz9vaMSCvbVz9QMTCeSCNfKHd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:44:43.179137 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--7d69473f--eeb6--5b22--bf27--181ed9eac77f-osd--block--7d69473f--eeb6--5b22--bf27--181ed9eac77f', 'dm-uuid-LVM-rJQmIINvGCLvyiYBR7BPCsE0l9Ac7YGetpM2LEc5JOr63yjDWOuKcaTFCCdwRmte'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:44:43.179145 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:44:43.179151 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:44:43.179157 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:44:43.179166 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': 
{'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8bd1ebb6--f1fa--58d8--b8a2--53a51729cfca-osd--block--8bd1ebb6--f1fa--58d8--b8a2--53a51729cfca', 'dm-uuid-LVM-2Fwxc5Ai0VKcdRkNWbyH7mgikuiNPcDqz4zN2NxBNprPiRAzwpTBwbkt5aHRRx46'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:44:43.179172 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:44:43.179182 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d6926eeb--1396--512c--9972--e44f7d919ea4-osd--block--d6926eeb--1396--512c--9972--e44f7d919ea4', 'dm-uuid-LVM-TVbW5CMftcxSXy1c5xps3v2GvblaD84SboJE4C3svpS8uL1HxSuFZkgv8JDsZseN'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 
'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:44:43.179187 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:44:43.179195 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:44:43.179201 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 
'ansible_loop_var': 'item'})
2026-01-10 14:44:43.179210 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-10 14:44:43.179216 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-10 14:44:43.179225 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-10 14:44:43.179231 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-10 14:44:43.179242 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-10 14:44:43.179248 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-10 14:44:43.179254 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-10 14:44:43.179263 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-10 14:44:43.179269 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_de778ce0-4f6d-44be-b211-86e0cabeb927', 'scsi-SQEMU_QEMU_HARDDISK_de778ce0-4f6d-44be-b211-86e0cabeb927'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_de778ce0-4f6d-44be-b211-86e0cabeb927-part1', 'scsi-SQEMU_QEMU_HARDDISK_de778ce0-4f6d-44be-b211-86e0cabeb927-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_de778ce0-4f6d-44be-b211-86e0cabeb927-part14', 'scsi-SQEMU_QEMU_HARDDISK_de778ce0-4f6d-44be-b211-86e0cabeb927-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_de778ce0-4f6d-44be-b211-86e0cabeb927-part15', 'scsi-SQEMU_QEMU_HARDDISK_de778ce0-4f6d-44be-b211-86e0cabeb927-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_de778ce0-4f6d-44be-b211-86e0cabeb927-part16', 'scsi-SQEMU_QEMU_HARDDISK_de778ce0-4f6d-44be-b211-86e0cabeb927-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-10 14:44:43.179279 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-10 14:44:43.179345 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--afcf3728--3a76--5607--aebb--61451d8643bd-osd--block--afcf3728--3a76--5607--aebb--61451d8643bd'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-9qr1pZ-RkHo-c3FE-UdXw-MB1l-GSnT-LlveUO', 'scsi-0QEMU_QEMU_HARDDISK_15b7e91d-d5fd-4068-ac98-0857e3d5fdf2', 'scsi-SQEMU_QEMU_HARDDISK_15b7e91d-d5fd-4068-ac98-0857e3d5fdf2'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-10 14:44:43.179359 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e7f1afe6-f3aa-449f-9835-56bec5ec9c51', 'scsi-SQEMU_QEMU_HARDDISK_e7f1afe6-f3aa-449f-9835-56bec5ec9c51'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e7f1afe6-f3aa-449f-9835-56bec5ec9c51-part1', 'scsi-SQEMU_QEMU_HARDDISK_e7f1afe6-f3aa-449f-9835-56bec5ec9c51-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e7f1afe6-f3aa-449f-9835-56bec5ec9c51-part14', 'scsi-SQEMU_QEMU_HARDDISK_e7f1afe6-f3aa-449f-9835-56bec5ec9c51-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e7f1afe6-f3aa-449f-9835-56bec5ec9c51-part15', 'scsi-SQEMU_QEMU_HARDDISK_e7f1afe6-f3aa-449f-9835-56bec5ec9c51-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e7f1afe6-f3aa-449f-9835-56bec5ec9c51-part16', 'scsi-SQEMU_QEMU_HARDDISK_e7f1afe6-f3aa-449f-9835-56bec5ec9c51-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-10 14:44:43.179372 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--7d69473f--eeb6--5b22--bf27--181ed9eac77f-osd--block--7d69473f--eeb6--5b22--bf27--181ed9eac77f'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-huiRGz-aiBY-e8Ey-pv2n-4eFw-rHyE-ibrNbS', 'scsi-0QEMU_QEMU_HARDDISK_b1ccfdf8-8ebc-4581-b39b-71c057731eea', 'scsi-SQEMU_QEMU_HARDDISK_b1ccfdf8-8ebc-4581-b39b-71c057731eea'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-10 14:44:43.179383 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--377cb61f--8fa6--58d2--888b--072b5e96ec0c-osd--block--377cb61f--8fa6--58d2--888b--072b5e96ec0c', 'dm-uuid-LVM-vZkjZSQHbS0q2GyNOh44hFZjUTSzvcamynRYm7ghd2xWRzADM7zfTvvhOTQ6ZkFq'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-10 14:44:43.179390 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--8bd1ebb6--f1fa--58d8--b8a2--53a51729cfca-osd--block--8bd1ebb6--f1fa--58d8--b8a2--53a51729cfca'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-s2tKbb-biWW-rhor-AR6o-qWz0-GpRW-LWXLkj', 'scsi-0QEMU_QEMU_HARDDISK_44339de3-07bb-4d03-9d3b-2e0777e51af2', 'scsi-SQEMU_QEMU_HARDDISK_44339de3-07bb-4d03-9d3b-2e0777e51af2'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-10 14:44:43.179399 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c3c9ac61-c03e-421c-9f43-37b1f8399a84', 'scsi-SQEMU_QEMU_HARDDISK_c3c9ac61-c03e-421c-9f43-37b1f8399a84'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-10 14:44:43.179407 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--d6926eeb--1396--512c--9972--e44f7d919ea4-osd--block--d6926eeb--1396--512c--9972--e44f7d919ea4'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-JNgVMk-aGs6-lbeJ-KhY4-gEYt-H63q-Mfmq4b', 'scsi-0QEMU_QEMU_HARDDISK_09bcf5bb-a136-4209-bef3-37a648ec73be', 'scsi-SQEMU_QEMU_HARDDISK_09bcf5bb-a136-4209-bef3-37a648ec73be'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-10 14:44:43.179413 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--82a5292d--e4f5--5675--b04e--23ddf5e1abb7-osd--block--82a5292d--e4f5--5675--b04e--23ddf5e1abb7', 'dm-uuid-LVM-DRgormTtPowM6Igp9Je7HfxfYSL52AtszM3oBsEeG5RiUP3wrwR8QJdi01POVrmD'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-10 14:44:43.179423 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-10-13-45-56-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-10 14:44:43.179433 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:44:43.179439 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c56b24e3-125a-48ee-acc4-7420ce900c20', 'scsi-SQEMU_QEMU_HARDDISK_c56b24e3-125a-48ee-acc4-7420ce900c20'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-10 14:44:43.179445 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-10-13-45-38-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-10 14:44:43.179450 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-10 14:44:43.179458 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:44:43.179464 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-10 14:44:43.179470 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-10 14:44:43.179480 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-10 14:44:43.179489 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-10 14:44:43.179495 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-10 14:44:43.179501 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-10 14:44:43.179509 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-10 14:44:43.179518 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1f1914ea-fd0f-4c7c-b7b9-c351b421a456', 'scsi-SQEMU_QEMU_HARDDISK_1f1914ea-fd0f-4c7c-b7b9-c351b421a456'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1f1914ea-fd0f-4c7c-b7b9-c351b421a456-part1', 'scsi-SQEMU_QEMU_HARDDISK_1f1914ea-fd0f-4c7c-b7b9-c351b421a456-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1f1914ea-fd0f-4c7c-b7b9-c351b421a456-part14', 'scsi-SQEMU_QEMU_HARDDISK_1f1914ea-fd0f-4c7c-b7b9-c351b421a456-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1f1914ea-fd0f-4c7c-b7b9-c351b421a456-part15', 'scsi-SQEMU_QEMU_HARDDISK_1f1914ea-fd0f-4c7c-b7b9-c351b421a456-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1f1914ea-fd0f-4c7c-b7b9-c351b421a456-part16', 'scsi-SQEMU_QEMU_HARDDISK_1f1914ea-fd0f-4c7c-b7b9-c351b421a456-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-10 14:44:43.179528 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--377cb61f--8fa6--58d2--888b--072b5e96ec0c-osd--block--377cb61f--8fa6--58d2--888b--072b5e96ec0c'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-fDJB5p-jnnX-ZrSt-I40a-mAqp-Scoe-YsWpaI', 'scsi-0QEMU_QEMU_HARDDISK_f0ceade3-8439-47c2-ab29-dba6a2d0af37', 'scsi-SQEMU_QEMU_HARDDISK_f0ceade3-8439-47c2-ab29-dba6a2d0af37'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-10 14:44:43.179536 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--82a5292d--e4f5--5675--b04e--23ddf5e1abb7-osd--block--82a5292d--e4f5--5675--b04e--23ddf5e1abb7'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-qtVK5u-Tw8C-lwsP-BX4N-dhRe-17lE-qDl3gH', 'scsi-0QEMU_QEMU_HARDDISK_7172e707-12eb-4bf8-889d-ca95993faa89', 'scsi-SQEMU_QEMU_HARDDISK_7172e707-12eb-4bf8-889d-ca95993faa89'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-10 14:44:43.179542 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_573345f2-5167-4bb0-bd40-d392a39279fc', 'scsi-SQEMU_QEMU_HARDDISK_573345f2-5167-4bb0-bd40-d392a39279fc'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-10 14:44:43.179551 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-10-13-46-01-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-10 14:44:43.179560 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:44:43.179566 | orchestrator |
2026-01-10 14:44:43.179572 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-01-10 14:44:43.179578 | orchestrator | Saturday 10 January 2026 14:42:52 +0000 (0:00:00.624) 0:00:18.190 ******
2026-01-10 14:44:43.179583 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:44:43.179589 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:44:43.179595 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:44:43.179600 | orchestrator |
2026-01-10 14:44:43.179606 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-01-10 14:44:43.179612 | orchestrator | Saturday 10 January 2026 14:42:53 +0000 (0:00:00.824) 0:00:19.014 ******
2026-01-10 14:44:43.179618 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:44:43.179623 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:44:43.179629 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:44:43.179635 | orchestrator |
2026-01-10 14:44:43.179640 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-01-10 14:44:43.179644 | orchestrator | Saturday 10 January 2026 14:42:54 +0000 (0:00:00.829) 0:00:19.547 ******
2026-01-10 14:44:43.179649 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:44:43.179654 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:44:43.179658 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:44:43.179663 | orchestrator |
2026-01-10 14:44:43.179668 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-01-10 14:44:43.179673 | orchestrator | Saturday 10 January 2026 14:42:55 +0000 (0:00:00.323) 0:00:20.376 ******
2026-01-10 14:44:43.179677 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:44:43.179682 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:44:43.179687 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:44:43.179692 | orchestrator |
2026-01-10 14:44:43.179696 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-01-10 14:44:43.179701 | orchestrator | Saturday 10 January 2026 14:42:55 +0000 (0:00:00.412) 0:00:20.700 ******
2026-01-10 14:44:43.179705 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:44:43.179711 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:44:43.179716 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:44:43.179722 | orchestrator |
2026-01-10 14:44:43.179727 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-01-10 14:44:43.179733 | orchestrator | Saturday 10 January 2026 14:42:55 +0000 (0:00:00.549) 0:00:21.113 ******
2026-01-10 14:44:43.179738 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:44:43.179744 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:44:43.179750 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:44:43.179755 | orchestrator |
2026-01-10 14:44:43.179761 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-01-10 14:44:43.179767 | orchestrator | Saturday 10 January 2026 14:42:56 +0000 (0:00:00.549) 0:00:21.662 ******
2026-01-10 14:44:43.179773 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-01-10 14:44:43.179779 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-01-10 14:44:43.179784 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-01-10 14:44:43.179790 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-01-10 14:44:43.179795 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-01-10 14:44:43.179801 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-01-10 14:44:43.179807 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-01-10 14:44:43.179819 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-01-10 14:44:43.179824 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-01-10 14:44:43.179830 | orchestrator |
2026-01-10 14:44:43.179836 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-01-10 14:44:43.179842 | orchestrator | Saturday 10 January 2026 14:42:57 +0000 (0:00:00.889) 0:00:22.551 ******
2026-01-10 14:44:43.179847 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-01-10 14:44:43.179853 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-01-10 14:44:43.179858 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-01-10 14:44:43.179864 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:44:43.179870 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-01-10 14:44:43.179876 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-01-10 14:44:43.179882 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-01-10 14:44:43.179887 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:44:43.179893 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-01-10 14:44:43.179899 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-01-10 14:44:43.179904 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-01-10 14:44:43.179910 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:44:43.179915 | orchestrator |
2026-01-10 14:44:43.179921 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-01-10 14:44:43.179927 | orchestrator | Saturday 10 January 2026 14:42:57 +0000 (0:00:00.397) 0:00:22.949 ******
2026-01-10 14:44:43.179933 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-10 14:44:43.179938 | orchestrator |
2026-01-10 14:44:43.179944 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-01-10 14:44:43.179950 | orchestrator | Saturday 10 January 2026 14:42:58 +0000 (0:00:00.724) 0:00:23.673 ******
2026-01-10 14:44:43.179959 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:44:43.179965 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:44:43.179970 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:44:43.179976 | orchestrator |
2026-01-10 14:44:43.179982 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-01-10 14:44:43.179987 | orchestrator | Saturday 10 January 2026 14:42:58 +0000 (0:00:00.325) 0:00:23.999 ******
2026-01-10 14:44:43.179993 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:44:43.179998 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:44:43.180004 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:44:43.180010 | orchestrator |
2026-01-10 14:44:43.180015 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-01-10 14:44:43.180021 | orchestrator | Saturday 10 January 2026 14:42:59 +0000 (0:00:00.333) 0:00:24.333 ******
2026-01-10 14:44:43.180027 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:44:43.180033 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:44:43.180038 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:44:43.180044 | orchestrator |
2026-01-10 14:44:43.180049 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-01-10 14:44:43.180055 | orchestrator | Saturday 10 January 2026 14:42:59 +0000 (0:00:00.312) 0:00:24.646 ******
2026-01-10 14:44:43.180061 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:44:43.180067 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:44:43.180073 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:44:43.180078 | orchestrator |
2026-01-10 14:44:43.180084 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-01-10 14:44:43.180089 | orchestrator | Saturday 10 January 2026 14:43:00 +0000 (0:00:00.933) 0:00:25.579 ******
2026-01-10 14:44:43.180095 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-10 14:44:43.180101 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-10 14:44:43.180111 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-10 14:44:43.180117 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:44:43.180123 | orchestrator |
2026-01-10 14:44:43.180129 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-01-10 14:44:43.180134 | orchestrator | Saturday 10 January 2026 14:43:00 +0000 (0:00:00.417) 0:00:25.997 ******
2026-01-10 14:44:43.180140 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-10 14:44:43.180146 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-10 14:44:43.180153 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-10 14:44:43.180158 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:44:43.180164 | orchestrator |
2026-01-10 14:44:43.180170 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-01-10 14:44:43.180175 | orchestrator | Saturday 10 January 2026 14:43:01 +0000 (0:00:00.432) 0:00:26.430 ******
2026-01-10 14:44:43.180181 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-10 14:44:43.180187 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-10 14:44:43.180192 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-10 14:44:43.180198 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:44:43.180204 | orchestrator |
2026-01-10 14:44:43.180209 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-01-10 14:44:43.180215 | orchestrator | Saturday 10 January 2026 14:43:01 +0000 (0:00:00.391) 0:00:26.821 ******
2026-01-10 14:44:43.180221 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:44:43.180226 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:44:43.180232 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:44:43.180238 | orchestrator |
2026-01-10 14:44:43.180243 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-01-10 14:44:43.180249 | orchestrator | Saturday 10 January 2026 14:43:01
+0000 (0:00:00.346) 0:00:27.168 ****** 2026-01-10 14:44:43.180255 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-01-10 14:44:43.180261 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-01-10 14:44:43.180267 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-01-10 14:44:43.180272 | orchestrator | 2026-01-10 14:44:43.180280 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-01-10 14:44:43.180286 | orchestrator | Saturday 10 January 2026 14:43:02 +0000 (0:00:00.499) 0:00:27.667 ****** 2026-01-10 14:44:43.180292 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-10 14:44:43.180298 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-10 14:44:43.180304 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-10 14:44:43.180310 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-01-10 14:44:43.180330 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-01-10 14:44:43.180336 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-01-10 14:44:43.180342 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-01-10 14:44:43.180347 | orchestrator | 2026-01-10 14:44:43.180353 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-01-10 14:44:43.180359 | orchestrator | Saturday 10 January 2026 14:43:03 +0000 (0:00:00.990) 0:00:28.658 ****** 2026-01-10 14:44:43.180364 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-10 14:44:43.180370 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-10 14:44:43.180376 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-10 14:44:43.180382 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-01-10 14:44:43.180387 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-01-10 14:44:43.180396 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-01-10 14:44:43.180406 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-01-10 14:44:43.180411 | orchestrator | 2026-01-10 14:44:43.180417 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2026-01-10 14:44:43.180423 | orchestrator | Saturday 10 January 2026 14:43:05 +0000 (0:00:02.075) 0:00:30.733 ****** 2026-01-10 14:44:43.180428 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:44:43.180433 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:44:43.180439 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2026-01-10 14:44:43.180445 | orchestrator | 2026-01-10 14:44:43.180450 | orchestrator | TASK [create openstack pool(s)] ************************************************ 2026-01-10 14:44:43.180456 | orchestrator | Saturday 10 January 2026 14:43:05 +0000 (0:00:00.375) 0:00:31.109 ****** 2026-01-10 14:44:43.180462 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-01-10 14:44:43.180470 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 
1}) 2026-01-10 14:44:43.180476 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-01-10 14:44:43.180482 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-01-10 14:44:43.180487 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-01-10 14:44:43.180493 | orchestrator | 2026-01-10 14:44:43.180498 | orchestrator | TASK [generate keys] *********************************************************** 2026-01-10 14:44:43.180504 | orchestrator | Saturday 10 January 2026 14:43:50 +0000 (0:00:44.302) 0:01:15.412 ****** 2026-01-10 14:44:43.180510 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-10 14:44:43.180515 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-10 14:44:43.180521 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-10 14:44:43.180527 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-10 14:44:43.180533 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-10 14:44:43.180541 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-10 
14:44:43.180548 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2026-01-10 14:44:43.180554 | orchestrator | 2026-01-10 14:44:43.180559 | orchestrator | TASK [get keys from monitors] ************************************************** 2026-01-10 14:44:43.180565 | orchestrator | Saturday 10 January 2026 14:44:12 +0000 (0:00:22.214) 0:01:37.626 ****** 2026-01-10 14:44:43.180570 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-10 14:44:43.180576 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-10 14:44:43.180585 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-10 14:44:43.180591 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-10 14:44:43.180597 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-10 14:44:43.180603 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-10 14:44:43.180608 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-01-10 14:44:43.180614 | orchestrator | 2026-01-10 14:44:43.180620 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2026-01-10 14:44:43.180626 | orchestrator | Saturday 10 January 2026 14:44:24 +0000 (0:00:12.526) 0:01:50.152 ****** 2026-01-10 14:44:43.180631 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-10 14:44:43.180637 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-01-10 14:44:43.180642 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-10 14:44:43.180648 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-10 14:44:43.180653 | orchestrator | changed: [testbed-node-5 -> 
testbed-node-1(192.168.16.11)] => (item=None) 2026-01-10 14:44:43.180663 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-10 14:44:43.180669 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-10 14:44:43.180674 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-01-10 14:44:43.180680 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-10 14:44:43.180686 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-10 14:44:43.180692 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-01-10 14:44:43.180697 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-10 14:44:43.180703 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-10 14:44:43.180708 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-01-10 14:44:43.180714 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-10 14:44:43.180719 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-10 14:44:43.180725 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-01-10 14:44:43.180730 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-10 14:44:43.180734 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2026-01-10 14:44:43.180739 | orchestrator | 2026-01-10 14:44:43.180744 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-10 14:44:43.180750 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-01-10 14:44:43.180757 | 
orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-01-10 14:44:43.180763 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-01-10 14:44:43.180768 | orchestrator | 2026-01-10 14:44:43.180774 | orchestrator | 2026-01-10 14:44:43.180779 | orchestrator | 2026-01-10 14:44:43.180785 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-10 14:44:43.180791 | orchestrator | Saturday 10 January 2026 14:44:42 +0000 (0:00:17.768) 0:02:07.921 ****** 2026-01-10 14:44:43.180797 | orchestrator | =============================================================================== 2026-01-10 14:44:43.180808 | orchestrator | create openstack pool(s) ----------------------------------------------- 44.30s 2026-01-10 14:44:43.180813 | orchestrator | generate keys ---------------------------------------------------------- 22.21s 2026-01-10 14:44:43.180819 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 17.77s 2026-01-10 14:44:43.180824 | orchestrator | get keys from monitors ------------------------------------------------- 12.53s 2026-01-10 14:44:43.180830 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.28s 2026-01-10 14:44:43.180836 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 2.08s 2026-01-10 14:44:43.180842 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.80s 2026-01-10 14:44:43.180847 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 0.99s 2026-01-10 14:44:43.180855 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.93s 2026-01-10 14:44:43.180861 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.89s 2026-01-10 
14:44:43.180866 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.89s 2026-01-10 14:44:43.180872 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.83s 2026-01-10 14:44:43.180878 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.82s 2026-01-10 14:44:43.180883 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.82s 2026-01-10 14:44:43.180889 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.72s 2026-01-10 14:44:43.180895 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.67s 2026-01-10 14:44:43.180900 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.67s 2026-01-10 14:44:43.180906 | orchestrator | ceph-facts : Check for a ceph mon socket -------------------------------- 0.67s 2026-01-10 14:44:43.180911 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.64s 2026-01-10 14:44:43.180917 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.62s 2026-01-10 14:44:43.180922 | orchestrator | 2026-01-10 14:44:43 | INFO  | Task 5121d459-9d9c-4578-ba7c-004bbaee22af is in state STARTED 2026-01-10 14:44:43.180928 | orchestrator | 2026-01-10 14:44:43 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:44:46.229702 | orchestrator | 2026-01-10 14:44:46 | INFO  | Task c6504c42-a5f0-4a43-90e4-7aa777c16b28 is in state STARTED 2026-01-10 14:44:46.231737 | orchestrator | 2026-01-10 14:44:46 | INFO  | Task 5121d459-9d9c-4578-ba7c-004bbaee22af is in state STARTED 2026-01-10 14:44:46.234633 | orchestrator | 2026-01-10 14:44:46 | INFO  | Task 2c603c40-4b51-4015-b9f3-187d335fe013 is in state STARTED 2026-01-10 14:44:46.234684 | orchestrator | 2026-01-10 14:44:46 | INFO  | Wait 1 second(s) until the next 
check 2026-01-10 14:44:49.287679 | orchestrator | 2026-01-10 14:44:49 | INFO  | Task c6504c42-a5f0-4a43-90e4-7aa777c16b28 is in state STARTED 2026-01-10 14:44:49.289304 | orchestrator | 2026-01-10 14:44:49 | INFO  | Task 5121d459-9d9c-4578-ba7c-004bbaee22af is in state STARTED 2026-01-10 14:44:49.290488 | orchestrator | 2026-01-10 14:44:49 | INFO  | Task 2c603c40-4b51-4015-b9f3-187d335fe013 is in state STARTED 2026-01-10 14:44:49.290519 | orchestrator | 2026-01-10 14:44:49 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:44:52.330767 | orchestrator | 2026-01-10 14:44:52 | INFO  | Task c6504c42-a5f0-4a43-90e4-7aa777c16b28 is in state STARTED 2026-01-10 14:44:52.333836 | orchestrator | 2026-01-10 14:44:52 | INFO  | Task 5121d459-9d9c-4578-ba7c-004bbaee22af is in state STARTED 2026-01-10 14:44:52.335434 | orchestrator | 2026-01-10 14:44:52 | INFO  | Task 2c603c40-4b51-4015-b9f3-187d335fe013 is in state STARTED 2026-01-10 14:44:52.335515 | orchestrator | 2026-01-10 14:44:52 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:44:55.384616 | orchestrator | 2026-01-10 14:44:55 | INFO  | Task c6504c42-a5f0-4a43-90e4-7aa777c16b28 is in state STARTED 2026-01-10 14:44:55.388120 | orchestrator | 2026-01-10 14:44:55 | INFO  | Task 5121d459-9d9c-4578-ba7c-004bbaee22af is in state STARTED 2026-01-10 14:44:55.390554 | orchestrator | 2026-01-10 14:44:55 | INFO  | Task 2c603c40-4b51-4015-b9f3-187d335fe013 is in state STARTED 2026-01-10 14:44:55.390604 | orchestrator | 2026-01-10 14:44:55 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:44:58.435988 | orchestrator | 2026-01-10 14:44:58 | INFO  | Task c6504c42-a5f0-4a43-90e4-7aa777c16b28 is in state STARTED 2026-01-10 14:44:58.438178 | orchestrator | 2026-01-10 14:44:58 | INFO  | Task 5121d459-9d9c-4578-ba7c-004bbaee22af is in state STARTED 2026-01-10 14:44:58.440119 | orchestrator | 2026-01-10 14:44:58 | INFO  | Task 2c603c40-4b51-4015-b9f3-187d335fe013 is in state STARTED 2026-01-10 
14:44:58.440162 | orchestrator | 2026-01-10 14:44:58 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:45:01.485560 | orchestrator | 2026-01-10 14:45:01 | INFO  | Task c6504c42-a5f0-4a43-90e4-7aa777c16b28 is in state STARTED 2026-01-10 14:45:01.486983 | orchestrator | 2026-01-10 14:45:01 | INFO  | Task 5121d459-9d9c-4578-ba7c-004bbaee22af is in state STARTED 2026-01-10 14:45:01.488901 | orchestrator | 2026-01-10 14:45:01 | INFO  | Task 2c603c40-4b51-4015-b9f3-187d335fe013 is in state STARTED 2026-01-10 14:45:01.489100 | orchestrator | 2026-01-10 14:45:01 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:45:04.555583 | orchestrator | 2026-01-10 14:45:04 | INFO  | Task c6504c42-a5f0-4a43-90e4-7aa777c16b28 is in state STARTED 2026-01-10 14:45:04.555639 | orchestrator | 2026-01-10 14:45:04 | INFO  | Task 5121d459-9d9c-4578-ba7c-004bbaee22af is in state STARTED 2026-01-10 14:45:04.556491 | orchestrator | 2026-01-10 14:45:04 | INFO  | Task 2c603c40-4b51-4015-b9f3-187d335fe013 is in state STARTED 2026-01-10 14:45:04.557031 | orchestrator | 2026-01-10 14:45:04 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:45:07.620646 | orchestrator | 2026-01-10 14:45:07 | INFO  | Task c6504c42-a5f0-4a43-90e4-7aa777c16b28 is in state SUCCESS 2026-01-10 14:45:07.622145 | orchestrator | 2026-01-10 14:45:07.622200 | orchestrator | 2026-01-10 14:45:07.622400 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-10 14:45:07.622413 | orchestrator | 2026-01-10 14:45:07.622424 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-10 14:45:07.622434 | orchestrator | Saturday 10 January 2026 14:43:30 +0000 (0:00:00.260) 0:00:00.260 ****** 2026-01-10 14:45:07.622460 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:45:07.622471 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:45:07.622480 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:45:07.622490 | 
orchestrator | 2026-01-10 14:45:07.622500 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-10 14:45:07.622509 | orchestrator | Saturday 10 January 2026 14:43:30 +0000 (0:00:00.310) 0:00:00.570 ****** 2026-01-10 14:45:07.622518 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2026-01-10 14:45:07.622528 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2026-01-10 14:45:07.622534 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2026-01-10 14:45:07.622540 | orchestrator | 2026-01-10 14:45:07.622546 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2026-01-10 14:45:07.622551 | orchestrator | 2026-01-10 14:45:07.622557 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-01-10 14:45:07.622562 | orchestrator | Saturday 10 January 2026 14:43:31 +0000 (0:00:00.450) 0:00:01.020 ****** 2026-01-10 14:45:07.622568 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:45:07.622590 | orchestrator | 2026-01-10 14:45:07.622596 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2026-01-10 14:45:07.622601 | orchestrator | Saturday 10 January 2026 14:43:31 +0000 (0:00:00.515) 0:00:01.536 ****** 2026-01-10 14:45:07.622611 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 
'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-10 14:45:07.622639 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 
'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-10 14:45:07.622651 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 
'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 
'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-10 14:45:07.622659 | orchestrator | 2026-01-10 14:45:07.622669 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2026-01-10 14:45:07.622682 | orchestrator | Saturday 10 January 2026 14:43:33 +0000 (0:00:01.354) 0:00:02.891 ****** 2026-01-10 14:45:07.622696 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:45:07.622705 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:45:07.622714 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:45:07.622722 | orchestrator | 2026-01-10 14:45:07.622731 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-01-10 14:45:07.622740 | orchestrator | Saturday 10 January 2026 14:43:33 +0000 (0:00:00.479) 0:00:03.370 ****** 2026-01-10 14:45:07.622757 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2026-01-10 14:45:07.622767 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2026-01-10 14:45:07.622776 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2026-01-10 14:45:07.622785 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2026-01-10 14:45:07.622794 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2026-01-10 14:45:07.622809 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2026-01-10 14:45:07.622819 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2026-01-10 14:45:07.622828 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2026-01-10 14:45:07.622838 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2026-01-10 14:45:07.622848 | orchestrator | skipping: 
[testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2026-01-10 14:45:07.622857 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2026-01-10 14:45:07.622865 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2026-01-10 14:45:07.622874 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2026-01-10 14:45:07.622884 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2026-01-10 14:45:07.622893 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2026-01-10 14:45:07.622902 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2026-01-10 14:45:07.622910 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2026-01-10 14:45:07.622918 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2026-01-10 14:45:07.622927 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2026-01-10 14:45:07.622934 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2026-01-10 14:45:07.622941 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2026-01-10 14:45:07.622949 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2026-01-10 14:45:07.622957 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2026-01-10 14:45:07.622964 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2026-01-10 14:45:07.622973 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2026-01-10 14:45:07.622983 | orchestrator | included: 
/ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2026-01-10 14:45:07.622991 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2026-01-10 14:45:07.622999 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2026-01-10 14:45:07.623008 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2026-01-10 14:45:07.623016 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2026-01-10 14:45:07.623025 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2026-01-10 14:45:07.623033 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2026-01-10 14:45:07.623041 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2026-01-10 14:45:07.623051 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2026-01-10 14:45:07.623068 | orchestrator | 2026-01-10 14:45:07.623078 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-10 14:45:07.623091 | orchestrator | Saturday 10 January 2026 14:43:34 +0000 (0:00:00.797) 0:00:04.167 
****** 2026-01-10 14:45:07.623100 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:45:07.623109 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:45:07.623118 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:45:07.623127 | orchestrator | 2026-01-10 14:45:07.623135 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-10 14:45:07.623144 | orchestrator | Saturday 10 January 2026 14:43:34 +0000 (0:00:00.303) 0:00:04.471 ****** 2026-01-10 14:45:07.623158 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:45:07.623168 | orchestrator | 2026-01-10 14:45:07.623177 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-10 14:45:07.623186 | orchestrator | Saturday 10 January 2026 14:43:34 +0000 (0:00:00.129) 0:00:04.601 ****** 2026-01-10 14:45:07.623195 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:45:07.623205 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:45:07.623213 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:45:07.623222 | orchestrator | 2026-01-10 14:45:07.623231 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-10 14:45:07.623238 | orchestrator | Saturday 10 January 2026 14:43:35 +0000 (0:00:00.537) 0:00:05.138 ****** 2026-01-10 14:45:07.623380 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:45:07.623391 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:45:07.623399 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:45:07.623408 | orchestrator | 2026-01-10 14:45:07.623417 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-10 14:45:07.623425 | orchestrator | Saturday 10 January 2026 14:43:35 +0000 (0:00:00.292) 0:00:05.431 ****** 2026-01-10 14:45:07.623432 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:45:07.623439 | orchestrator | 2026-01-10 14:45:07.623447 | orchestrator | TASK [horizon : 
Update custom policy file name] ******************************** 2026-01-10 14:45:07.623454 | orchestrator | Saturday 10 January 2026 14:43:35 +0000 (0:00:00.142) 0:00:05.574 ****** 2026-01-10 14:45:07.623462 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:45:07.623470 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:45:07.623478 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:45:07.623487 | orchestrator | 2026-01-10 14:45:07.623494 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-10 14:45:07.623501 | orchestrator | Saturday 10 January 2026 14:43:36 +0000 (0:00:00.323) 0:00:05.897 ****** 2026-01-10 14:45:07.623509 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:45:07.623517 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:45:07.623525 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:45:07.623534 | orchestrator | 2026-01-10 14:45:07.623543 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-10 14:45:07.623550 | orchestrator | Saturday 10 January 2026 14:43:36 +0000 (0:00:00.303) 0:00:06.200 ****** 2026-01-10 14:45:07.623558 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:45:07.623568 | orchestrator | 2026-01-10 14:45:07.623577 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-10 14:45:07.623586 | orchestrator | Saturday 10 January 2026 14:43:36 +0000 (0:00:00.312) 0:00:06.513 ****** 2026-01-10 14:45:07.623595 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:45:07.623604 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:45:07.623613 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:45:07.623622 | orchestrator | 2026-01-10 14:45:07.623632 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-10 14:45:07.623641 | orchestrator | Saturday 10 January 2026 14:43:37 +0000 (0:00:00.295) 
0:00:06.808 ****** 2026-01-10 14:45:07.623650 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:45:07.623659 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:45:07.623668 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:45:07.623677 | orchestrator | 2026-01-10 14:45:07.623698 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-10 14:45:07.623707 | orchestrator | Saturday 10 January 2026 14:43:37 +0000 (0:00:00.356) 0:00:07.165 ****** 2026-01-10 14:45:07.623717 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:45:07.623726 | orchestrator | 2026-01-10 14:45:07.623735 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-10 14:45:07.623745 | orchestrator | Saturday 10 January 2026 14:43:37 +0000 (0:00:00.143) 0:00:07.308 ****** 2026-01-10 14:45:07.623751 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:45:07.623757 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:45:07.623762 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:45:07.623767 | orchestrator | 2026-01-10 14:45:07.623773 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-10 14:45:07.623778 | orchestrator | Saturday 10 January 2026 14:43:37 +0000 (0:00:00.345) 0:00:07.654 ****** 2026-01-10 14:45:07.623784 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:45:07.623789 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:45:07.623795 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:45:07.623800 | orchestrator | 2026-01-10 14:45:07.623805 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-10 14:45:07.623811 | orchestrator | Saturday 10 January 2026 14:43:38 +0000 (0:00:00.526) 0:00:08.181 ****** 2026-01-10 14:45:07.623816 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:45:07.623822 | orchestrator | 2026-01-10 14:45:07.623827 | orchestrator | 
TASK [horizon : Update custom policy file name] ******************************** 2026-01-10 14:45:07.623833 | orchestrator | Saturday 10 January 2026 14:43:38 +0000 (0:00:00.140) 0:00:08.321 ****** 2026-01-10 14:45:07.623838 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:45:07.623843 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:45:07.623849 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:45:07.623854 | orchestrator | 2026-01-10 14:45:07.623860 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-10 14:45:07.623865 | orchestrator | Saturday 10 January 2026 14:43:38 +0000 (0:00:00.314) 0:00:08.636 ****** 2026-01-10 14:45:07.623870 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:45:07.623876 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:45:07.623881 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:45:07.623886 | orchestrator | 2026-01-10 14:45:07.623892 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-10 14:45:07.623897 | orchestrator | Saturday 10 January 2026 14:43:39 +0000 (0:00:00.338) 0:00:08.974 ****** 2026-01-10 14:45:07.623902 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:45:07.623908 | orchestrator | 2026-01-10 14:45:07.623920 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-10 14:45:07.623929 | orchestrator | Saturday 10 January 2026 14:43:39 +0000 (0:00:00.135) 0:00:09.110 ****** 2026-01-10 14:45:07.623941 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:45:07.623954 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:45:07.623964 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:45:07.623972 | orchestrator | 2026-01-10 14:45:07.623980 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-10 14:45:07.624001 | orchestrator | Saturday 10 January 2026 14:43:39 
+0000 (0:00:00.289) 0:00:09.400 ****** 2026-01-10 14:45:07.624011 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:45:07.624021 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:45:07.624030 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:45:07.624040 | orchestrator | 2026-01-10 14:45:07.624049 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-10 14:45:07.624060 | orchestrator | Saturday 10 January 2026 14:43:40 +0000 (0:00:00.642) 0:00:10.042 ****** 2026-01-10 14:45:07.624070 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:45:07.624080 | orchestrator | 2026-01-10 14:45:07.624090 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-10 14:45:07.624100 | orchestrator | Saturday 10 January 2026 14:43:40 +0000 (0:00:00.140) 0:00:10.183 ****** 2026-01-10 14:45:07.624115 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:45:07.624122 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:45:07.624128 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:45:07.624134 | orchestrator | 2026-01-10 14:45:07.624140 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-10 14:45:07.624147 | orchestrator | Saturday 10 January 2026 14:43:40 +0000 (0:00:00.287) 0:00:10.470 ****** 2026-01-10 14:45:07.624153 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:45:07.624159 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:45:07.624166 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:45:07.624172 | orchestrator | 2026-01-10 14:45:07.624178 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-10 14:45:07.624184 | orchestrator | Saturday 10 January 2026 14:43:41 +0000 (0:00:00.367) 0:00:10.837 ****** 2026-01-10 14:45:07.624191 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:45:07.624197 | orchestrator | 2026-01-10 14:45:07.624204 
| orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-10 14:45:07.624210 | orchestrator | Saturday 10 January 2026 14:43:41 +0000 (0:00:00.136) 0:00:10.974 ****** 2026-01-10 14:45:07.624216 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:45:07.624222 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:45:07.624229 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:45:07.624235 | orchestrator | 2026-01-10 14:45:07.624241 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-10 14:45:07.624247 | orchestrator | Saturday 10 January 2026 14:43:41 +0000 (0:00:00.515) 0:00:11.489 ****** 2026-01-10 14:45:07.624253 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:45:07.624260 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:45:07.624266 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:45:07.624272 | orchestrator | 2026-01-10 14:45:07.624279 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-10 14:45:07.624285 | orchestrator | Saturday 10 January 2026 14:43:42 +0000 (0:00:00.328) 0:00:11.818 ****** 2026-01-10 14:45:07.624310 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:45:07.624318 | orchestrator | 2026-01-10 14:45:07.624325 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-10 14:45:07.624331 | orchestrator | Saturday 10 January 2026 14:43:42 +0000 (0:00:00.156) 0:00:11.975 ****** 2026-01-10 14:45:07.624337 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:45:07.624343 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:45:07.624349 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:45:07.624355 | orchestrator | 2026-01-10 14:45:07.624362 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-10 14:45:07.624368 | orchestrator | Saturday 10 January 
2026 14:43:42 +0000 (0:00:00.331) 0:00:12.306 ****** 2026-01-10 14:45:07.624374 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:45:07.624380 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:45:07.624386 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:45:07.624392 | orchestrator | 2026-01-10 14:45:07.624397 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-10 14:45:07.624402 | orchestrator | Saturday 10 January 2026 14:43:43 +0000 (0:00:00.431) 0:00:12.738 ****** 2026-01-10 14:45:07.624408 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:45:07.624449 | orchestrator | 2026-01-10 14:45:07.624455 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-10 14:45:07.624461 | orchestrator | Saturday 10 January 2026 14:43:43 +0000 (0:00:00.177) 0:00:12.916 ****** 2026-01-10 14:45:07.624466 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:45:07.624472 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:45:07.624478 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:45:07.624483 | orchestrator | 2026-01-10 14:45:07.624489 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2026-01-10 14:45:07.624494 | orchestrator | Saturday 10 January 2026 14:43:43 +0000 (0:00:00.567) 0:00:13.484 ****** 2026-01-10 14:45:07.624505 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:45:07.624510 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:45:07.624516 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:45:07.624521 | orchestrator | 2026-01-10 14:45:07.624526 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2026-01-10 14:45:07.624532 | orchestrator | Saturday 10 January 2026 14:43:45 +0000 (0:00:01.759) 0:00:15.244 ****** 2026-01-10 14:45:07.624537 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-01-10 14:45:07.624543 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-01-10 14:45:07.624549 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-01-10 14:45:07.624554 | orchestrator | 2026-01-10 14:45:07.624560 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2026-01-10 14:45:07.624565 | orchestrator | Saturday 10 January 2026 14:43:47 +0000 (0:00:01.912) 0:00:17.156 ****** 2026-01-10 14:45:07.624576 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-01-10 14:45:07.624582 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-01-10 14:45:07.624589 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-01-10 14:45:07.624595 | orchestrator | 2026-01-10 14:45:07.624605 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2026-01-10 14:45:07.624611 | orchestrator | Saturday 10 January 2026 14:43:49 +0000 (0:00:02.283) 0:00:19.440 ****** 2026-01-10 14:45:07.624617 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-01-10 14:45:07.624622 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-01-10 14:45:07.624628 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-01-10 14:45:07.624634 | orchestrator | 2026-01-10 14:45:07.624639 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2026-01-10 14:45:07.624645 | orchestrator | Saturday 10 January 2026 14:43:51 +0000 (0:00:02.066) 
0:00:21.507 ****** 2026-01-10 14:45:07.624650 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:45:07.624656 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:45:07.624661 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:45:07.624670 | orchestrator | 2026-01-10 14:45:07.624679 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2026-01-10 14:45:07.624692 | orchestrator | Saturday 10 January 2026 14:43:52 +0000 (0:00:00.308) 0:00:21.816 ****** 2026-01-10 14:45:07.624704 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:45:07.624712 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:45:07.624720 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:45:07.624728 | orchestrator | 2026-01-10 14:45:07.624737 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-01-10 14:45:07.624745 | orchestrator | Saturday 10 January 2026 14:43:52 +0000 (0:00:00.321) 0:00:22.137 ****** 2026-01-10 14:45:07.624753 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:45:07.624762 | orchestrator | 2026-01-10 14:45:07.624772 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2026-01-10 14:45:07.624781 | orchestrator | Saturday 10 January 2026 14:43:53 +0000 (0:00:00.815) 0:00:22.953 ****** 2026-01-10 14:45:07.624793 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 
'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-10 14:45:07.624824 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 
'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-10 14:45:07.624845 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 
'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-10 14:45:07.624861 | orchestrator | 2026-01-10 14:45:07.624869 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2026-01-10 14:45:07.624878 | orchestrator | Saturday 10 January 2026 14:43:54 +0000 (0:00:01.546) 0:00:24.500 ****** 2026-01-10 14:45:07.624893 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-10 14:45:07.624909 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:45:07.624929 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 
'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-10 14:45:07.624939 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:45:07.624949 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 
'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-01-10 14:45:07.624964 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:45:07.624973 | orchestrator |
2026-01-10 14:45:07.624982 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] *****
2026-01-10 14:45:07.624991 | orchestrator | Saturday 10 January 2026 14:43:55 +0000 (0:00:00.753) 0:00:25.254 ******
2026-01-10 14:45:07.625010 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-01-10 14:45:07.625020 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:45:07.625031 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', ...})
2026-01-10 14:45:07.625045 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:45:07.625063 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', ...})
2026-01-10 14:45:07.625074 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:45:07.625083 | orchestrator |
2026-01-10 14:45:07.625092 | orchestrator | TASK [horizon : Deploy horizon container] **************************************
2026-01-10 14:45:07.625105 | orchestrator | Saturday 10 January 2026 14:43:56 +0000 (0:00:00.872) 0:00:26.127 ******
2026-01-10 14:45:07.625114 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', ...})
2026-01-10 14:45:07.625134 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', ...})
2026-01-10 14:45:07.625150 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', ...})
2026-01-10 14:45:07.625160 | orchestrator |
2026-01-10 14:45:07.625170 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-01-10 14:45:07.625182 | orchestrator | Saturday 10 January 2026 14:43:58 +0000 (0:00:01.592) 0:00:27.719 ******
2026-01-10 14:45:07.625190 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:45:07.625200 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:45:07.625208 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:45:07.625218 | orchestrator |
2026-01-10 14:45:07.625227 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-01-10
14:45:07.625240 | orchestrator | Saturday 10 January 2026 14:43:58 +0000 (0:00:00.316) 0:00:28.035 ******
2026-01-10 14:45:07.625250 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:45:07.625258 | orchestrator |
2026-01-10 14:45:07.625266 | orchestrator | TASK [horizon : Creating Horizon database] *************************************
2026-01-10 14:45:07.625275 | orchestrator | Saturday 10 January 2026 14:43:58 +0000 (0:00:00.623) 0:00:28.659 ******
2026-01-10 14:45:07.625285 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:45:07.625381 | orchestrator |
2026-01-10 14:45:07.625393 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ********
2026-01-10 14:45:07.625402 | orchestrator | Saturday 10 January 2026 14:44:02 +0000 (0:00:03.196) 0:00:31.856 ******
2026-01-10 14:45:07.625411 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:45:07.625474 | orchestrator |
2026-01-10 14:45:07.625487 | orchestrator | TASK [horizon : Running Horizon bootstrap container] ***************************
2026-01-10 14:45:07.625497 | orchestrator | Saturday 10 January 2026 14:44:05 +0000 (0:00:03.038) 0:00:34.894 ******
2026-01-10 14:45:07.625507 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:45:07.625517 | orchestrator |
2026-01-10 14:45:07.625522 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2026-01-10 14:45:07.625528 | orchestrator | Saturday 10 January 2026 14:44:20 +0000 (0:00:15.774) 0:00:50.669 ******
2026-01-10 14:45:07.625533 | orchestrator |
2026-01-10 14:45:07.625539 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2026-01-10 14:45:07.625544 | orchestrator | Saturday 10 January 2026 14:44:21 +0000 (0:00:00.078) 0:00:50.748 ******
2026-01-10 14:45:07.625550 | orchestrator |
2026-01-10 14:45:07.625555 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2026-01-10 14:45:07.625560 | orchestrator | Saturday 10 January 2026 14:44:21 +0000 (0:00:00.066) 0:00:50.815 ******
2026-01-10 14:45:07.625566 | orchestrator |
2026-01-10 14:45:07.625571 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] **************************
2026-01-10 14:45:07.625577 | orchestrator | Saturday 10 January 2026 14:44:21 +0000 (0:00:00.082) 0:00:50.897 ******
2026-01-10 14:45:07.625582 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:45:07.625588 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:45:07.625593 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:45:07.625598 | orchestrator |
2026-01-10 14:45:07.625604 | orchestrator | PLAY RECAP *********************************************************************
2026-01-10 14:45:07.625610 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-01-10 14:45:07.625617 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0
2026-01-10 14:45:07.625623 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0
2026-01-10 14:45:07.625628 | orchestrator |
2026-01-10 14:45:07.625634 | orchestrator |
2026-01-10 14:45:07.625639 | orchestrator | TASKS RECAP ********************************************************************
2026-01-10 14:45:07.625645 | orchestrator | Saturday 10 January 2026 14:45:05 +0000 (0:00:44.437) 0:01:35.335 ******
2026-01-10 14:45:07.625650 | orchestrator | ===============================================================================
2026-01-10 14:45:07.625655 | orchestrator | horizon : Restart horizon container ------------------------------------ 44.44s
2026-01-10 14:45:07.625661 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 15.77s
2026-01-10 14:45:07.625666 | orchestrator | horizon : Creating Horizon database ------------------------------------- 3.20s
2026-01-10 14:45:07.625671 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 3.04s
2026-01-10 14:45:07.625676 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.28s
2026-01-10 14:45:07.625682 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 2.07s
2026-01-10 14:45:07.625687 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 1.91s
2026-01-10 14:45:07.625695 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.76s
2026-01-10 14:45:07.625703 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.59s
2026-01-10 14:45:07.625708 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.55s
2026-01-10 14:45:07.625714 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.36s
2026-01-10 14:45:07.625719 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 0.87s
2026-01-10 14:45:07.625725 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.82s
2026-01-10 14:45:07.625730 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.80s
2026-01-10 14:45:07.625740 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.75s
2026-01-10 14:45:07.625746 | orchestrator | horizon : Update policy file name --------------------------------------- 0.64s
2026-01-10 14:45:07.625751 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.62s
2026-01-10 14:45:07.625757 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.57s
2026-01-10 14:45:07.625767 | orchestrator | horizon : 
Update custom policy file name -------------------------------- 0.54s
2026-01-10 14:45:07.625773 | orchestrator | horizon : Update policy file name --------------------------------------- 0.53s
2026-01-10 14:45:07.625778 | orchestrator | 2026-01-10 14:45:07 | INFO  | Task 5121d459-9d9c-4578-ba7c-004bbaee22af is in state STARTED
2026-01-10 14:45:07.626614 | orchestrator | 2026-01-10 14:45:07 | INFO  | Task 2c603c40-4b51-4015-b9f3-187d335fe013 is in state STARTED
2026-01-10 14:45:07.626654 | orchestrator | 2026-01-10 14:45:07 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:45:10.676938 | orchestrator | 2026-01-10 14:45:10 | INFO  | Task 5121d459-9d9c-4578-ba7c-004bbaee22af is in state STARTED
2026-01-10 14:45:10.678748 | orchestrator | 2026-01-10 14:45:10 | INFO  | Task 2c603c40-4b51-4015-b9f3-187d335fe013 is in state STARTED
2026-01-10 14:45:10.678823 | orchestrator | 2026-01-10 14:45:10 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:45:13.743058 | orchestrator | 2026-01-10 14:45:13 | INFO  | Task 5121d459-9d9c-4578-ba7c-004bbaee22af is in state STARTED
2026-01-10 14:45:13.745722 | orchestrator | 2026-01-10 14:45:13 | INFO  | Task 2c603c40-4b51-4015-b9f3-187d335fe013 is in state STARTED
2026-01-10 14:45:13.745888 | orchestrator | 2026-01-10 14:45:13 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:45:16.784191 | orchestrator | 2026-01-10 14:45:16 | INFO  | Task 5121d459-9d9c-4578-ba7c-004bbaee22af is in state STARTED
2026-01-10 14:45:16.786619 | orchestrator | 2026-01-10 14:45:16 | INFO  | Task 2c603c40-4b51-4015-b9f3-187d335fe013 is in state STARTED
2026-01-10 14:45:16.786687 | orchestrator | 2026-01-10 14:45:16 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:45:19.838320 | orchestrator | 2026-01-10 14:45:19 | INFO  | Task 5121d459-9d9c-4578-ba7c-004bbaee22af is in state STARTED
2026-01-10 14:45:19.840002 | orchestrator | 2026-01-10 14:45:19 | INFO  | Task 2c603c40-4b51-4015-b9f3-187d335fe013 is in state STARTED
2026-01-10 14:45:19.840444 | orchestrator | 2026-01-10 14:45:19 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:45:22.898220 | orchestrator | 2026-01-10 14:45:22 | INFO  | Task 5121d459-9d9c-4578-ba7c-004bbaee22af is in state STARTED
2026-01-10 14:45:22.899745 | orchestrator | 2026-01-10 14:45:22 | INFO  | Task 2c603c40-4b51-4015-b9f3-187d335fe013 is in state SUCCESS
2026-01-10 14:45:22.901527 | orchestrator | 2026-01-10 14:45:22 | INFO  | Task 13bf507d-c423-40e2-93bf-4f452de513b1 is in state STARTED
2026-01-10 14:45:22.901582 | orchestrator | 2026-01-10 14:45:22 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:45:25.951690 | orchestrator | 2026-01-10 14:45:25 | INFO  | Task 5121d459-9d9c-4578-ba7c-004bbaee22af is in state STARTED
2026-01-10 14:45:25.952207 | orchestrator | 2026-01-10 14:45:25 | INFO  | Task 13bf507d-c423-40e2-93bf-4f452de513b1 is in state STARTED
2026-01-10 14:45:25.952359 | orchestrator | 2026-01-10 14:45:25 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:45:29.019214 | orchestrator | 2026-01-10 14:45:29 | INFO  | Task 5121d459-9d9c-4578-ba7c-004bbaee22af is in state STARTED
2026-01-10 14:45:29.019797 | orchestrator | 2026-01-10 14:45:29 | INFO  | Task 13bf507d-c423-40e2-93bf-4f452de513b1 is in state STARTED
2026-01-10 14:45:29.020177 | orchestrator | 2026-01-10 14:45:29 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:45:32.072251 | orchestrator | 2026-01-10 14:45:32 | INFO  | Task 5121d459-9d9c-4578-ba7c-004bbaee22af is in state STARTED
2026-01-10 14:45:32.073539 | orchestrator | 2026-01-10 14:45:32 | INFO  | Task 13bf507d-c423-40e2-93bf-4f452de513b1 is in state STARTED
2026-01-10 14:45:32.073595 | orchestrator | 2026-01-10 14:45:32 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:45:35.112187 | orchestrator | 2026-01-10 14:45:35 | INFO  | Task 5121d459-9d9c-4578-ba7c-004bbaee22af is in state STARTED
2026-01-10 14:45:35.113370 | orchestrator | 2026-01-10 14:45:35 | INFO  | Task 13bf507d-c423-40e2-93bf-4f452de513b1 is in state STARTED
2026-01-10 14:45:35.114070 | orchestrator | 2026-01-10 14:45:35 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:45:38.143942 | orchestrator | 2026-01-10 14:45:38 | INFO  | Task 5121d459-9d9c-4578-ba7c-004bbaee22af is in state STARTED
2026-01-10 14:45:38.146331 | orchestrator | 2026-01-10 14:45:38 | INFO  | Task 13bf507d-c423-40e2-93bf-4f452de513b1 is in state STARTED
2026-01-10 14:45:38.146544 | orchestrator | 2026-01-10 14:45:38 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:45:41.194863 | orchestrator | 2026-01-10 14:45:41 | INFO  | Task 5121d459-9d9c-4578-ba7c-004bbaee22af is in state STARTED
2026-01-10 14:45:41.196933 | orchestrator | 2026-01-10 14:45:41 | INFO  | Task 13bf507d-c423-40e2-93bf-4f452de513b1 is in state STARTED
2026-01-10 14:45:41.197348 | orchestrator | 2026-01-10 14:45:41 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:45:44.238346 | orchestrator | 2026-01-10 14:45:44 | INFO  | Task 5121d459-9d9c-4578-ba7c-004bbaee22af is in state STARTED
2026-01-10 14:45:44.239302 | orchestrator | 2026-01-10 14:45:44 | INFO  | Task 13bf507d-c423-40e2-93bf-4f452de513b1 is in state STARTED
2026-01-10 14:45:44.239373 | orchestrator | 2026-01-10 14:45:44 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:45:47.284942 | orchestrator | 2026-01-10 14:45:47 | INFO  | Task 5121d459-9d9c-4578-ba7c-004bbaee22af is in state STARTED
2026-01-10 14:45:47.287346 | orchestrator | 2026-01-10 14:45:47 | INFO  | Task 13bf507d-c423-40e2-93bf-4f452de513b1 is in state STARTED
2026-01-10 14:45:47.287405 | orchestrator | 2026-01-10 14:45:47 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:45:50.327988 | orchestrator | 2026-01-10 14:45:50 | INFO  | Task 5121d459-9d9c-4578-ba7c-004bbaee22af is in state STARTED
2026-01-10 14:45:50.329655 | orchestrator | 2026-01-10 14:45:50 | INFO  | Task 13bf507d-c423-40e2-93bf-4f452de513b1 is in state STARTED
2026-01-10 14:45:50.330328 | orchestrator | 2026-01-10 14:45:50 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:45:53.368143 | orchestrator | 2026-01-10 14:45:53 | INFO  | Task 5121d459-9d9c-4578-ba7c-004bbaee22af is in state STARTED
2026-01-10 14:45:53.371638 | orchestrator | 2026-01-10 14:45:53 | INFO  | Task 13bf507d-c423-40e2-93bf-4f452de513b1 is in state STARTED
2026-01-10 14:45:53.371697 | orchestrator | 2026-01-10 14:45:53 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:45:56.418305 | orchestrator | 2026-01-10 14:45:56 | INFO  | Task 5121d459-9d9c-4578-ba7c-004bbaee22af is in state STARTED
2026-01-10 14:45:56.420798 | orchestrator | 2026-01-10 14:45:56 | INFO  | Task 13bf507d-c423-40e2-93bf-4f452de513b1 is in state STARTED
2026-01-10 14:45:56.420856 | orchestrator | 2026-01-10 14:45:56 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:45:59.473113 | orchestrator | 2026-01-10 14:45:59 | INFO  | Task 5121d459-9d9c-4578-ba7c-004bbaee22af is in state STARTED
2026-01-10 14:45:59.474792 | orchestrator | 2026-01-10 14:45:59 | INFO  | Task 13bf507d-c423-40e2-93bf-4f452de513b1 is in state STARTED
2026-01-10 14:45:59.474858 | orchestrator | 2026-01-10 14:45:59 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:46:02.516587 | orchestrator | 2026-01-10 14:46:02 | INFO  | Task 5121d459-9d9c-4578-ba7c-004bbaee22af is in state STARTED
2026-01-10 14:46:02.518052 | orchestrator | 2026-01-10 14:46:02 | INFO  | Task 13bf507d-c423-40e2-93bf-4f452de513b1 is in state STARTED
2026-01-10 14:46:02.518098 | orchestrator | 2026-01-10 14:46:02 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:46:05.565861 | orchestrator | 2026-01-10 14:46:05 | INFO  | Task 5121d459-9d9c-4578-ba7c-004bbaee22af is in state STARTED
2026-01-10 14:46:05.568763 | orchestrator | 2026-01-10 14:46:05 | INFO  | Task 13bf507d-c423-40e2-93bf-4f452de513b1 is in state STARTED
2026-01-10 14:46:05.568810 | orchestrator | 2026-01-10 14:46:05 | INFO  | Wait 1 second(s) 
until the next check
2026-01-10 14:46:08.595739 | orchestrator | 2026-01-10 14:46:08 | INFO  | Task eb9051fe-92a1-413c-8ece-65fea6179abd is in state STARTED
2026-01-10 14:46:08.596429 | orchestrator | 2026-01-10 14:46:08 | INFO  | Task c84560a3-2c2e-44ed-8959-cd0d95ccba88 is in state STARTED
2026-01-10 14:46:08.598422 | orchestrator | 2026-01-10 14:46:08 | INFO  | Task 5121d459-9d9c-4578-ba7c-004bbaee22af is in state SUCCESS
2026-01-10 14:46:08.599363 | orchestrator |
2026-01-10 14:46:08.599391 | orchestrator |
2026-01-10 14:46:08.599399 | orchestrator | PLAY [Copy ceph keys to the configuration repository] **************************
2026-01-10 14:46:08.599406 | orchestrator |
2026-01-10 14:46:08.599413 | orchestrator | TASK [Check if ceph keys exist] ************************************************
2026-01-10 14:46:08.599420 | orchestrator | Saturday 10 January 2026 14:44:47 +0000 (0:00:00.156) 0:00:00.156 ******
2026-01-10 14:46:08.599426 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring)
2026-01-10 14:46:08.599434 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-01-10 14:46:08.599441 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-01-10 14:46:08.599447 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring)
2026-01-10 14:46:08.599454 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-01-10 14:46:08.599538 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring)
2026-01-10 14:46:08.599550 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring)
2026-01-10 14:46:08.599556 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring)
2026-01-10 14:46:08.599562 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring)
2026-01-10 14:46:08.599568 | orchestrator |
2026-01-10 14:46:08.599574 | orchestrator | TASK [Fetch all ceph keys] *****************************************************
2026-01-10 14:46:08.599580 | orchestrator | Saturday 10 January 2026 14:44:51 +0000 (0:00:04.176) 0:00:04.332 ******
2026-01-10 14:46:08.599587 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring)
2026-01-10 14:46:08.599593 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-01-10 14:46:08.599599 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-01-10 14:46:08.599606 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring)
2026-01-10 14:46:08.599612 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-01-10 14:46:08.599636 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring)
2026-01-10 14:46:08.599643 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring)
2026-01-10 14:46:08.599650 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring)
2026-01-10 14:46:08.599656 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring)
2026-01-10 14:46:08.599662 | orchestrator |
2026-01-10 14:46:08.599668 | orchestrator | TASK [Create share directory] **************************************************
2026-01-10 14:46:08.599675 | orchestrator | Saturday 10 January 2026 14:44:55 +0000 (0:00:03.956) 0:00:08.289 ******
2026-01-10 14:46:08.599682 | orchestrator | changed: [testbed-manager -> localhost]
2026-01-10 14:46:08.599689 | orchestrator |
2026-01-10 14:46:08.599696 | orchestrator | TASK [Write ceph keys to the share directory] **********************************
2026-01-10 14:46:08.599702 | orchestrator | Saturday 10 January 2026 14:44:56 +0000 (0:00:01.029) 0:00:09.318 ******
2026-01-10 14:46:08.599708 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring)
2026-01-10 14:46:08.599715 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-01-10 14:46:08.599722 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-01-10 14:46:08.599944 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring)
2026-01-10 14:46:08.599958 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-01-10 14:46:08.599964 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring)
2026-01-10 14:46:08.599970 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring)
2026-01-10 14:46:08.599976 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring)
2026-01-10 14:46:08.599983 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring)
2026-01-10 14:46:08.599989 | orchestrator |
2026-01-10 14:46:08.599995 | orchestrator | TASK [Check if target directories exist] ***************************************
2026-01-10 14:46:08.600001 | orchestrator | Saturday 10 January 2026 14:45:10 +0000 (0:00:13.516) 0:00:22.834 ******
2026-01-10 14:46:08.600007 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/infrastructure/files/ceph)
2026-01-10 14:46:08.600014 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume)
2026-01-10 14:46:08.600020 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup)
2026-01-10 14:46:08.600026 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup)
2026-01-10 14:46:08.600055 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova)
2026-01-10 14:46:08.600062 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova)
2026-01-10 14:46:08.600069 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/glance)
2026-01-10 14:46:08.600075 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/gnocchi)
2026-01-10 14:46:08.600082 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/manila)
2026-01-10 14:46:08.600088 | orchestrator |
2026-01-10 14:46:08.600095 | orchestrator | TASK [Write ceph keys to the configuration directory] **************************
2026-01-10 14:46:08.600101 | orchestrator | Saturday 10 January 2026 14:45:13 +0000 (0:00:03.144) 0:00:25.978 ******
2026-01-10 14:46:08.600108 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring)
2026-01-10 14:46:08.600114 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-01-10 14:46:08.600132 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-01-10 14:46:08.600139 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring)
2026-01-10 14:46:08.600146 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-01-10 14:46:08.600152 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring)
2026-01-10 14:46:08.600158 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring)
2026-01-10 14:46:08.600164 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring)
2026-01-10 14:46:08.600171 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring)
2026-01-10 14:46:08.600177 | orchestrator |
2026-01-10 14:46:08.600183 | orchestrator | PLAY RECAP *********************************************************************
2026-01-10 14:46:08.600189 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:46:08.600196 | orchestrator |
2026-01-10 14:46:08.600203 | orchestrator |
2026-01-10 14:46:08.600209 | orchestrator | TASKS RECAP ********************************************************************
2026-01-10 14:46:08.600215 | orchestrator | Saturday 10 January 2026 14:45:20 +0000 (0:00:07.239) 0:00:33.218 ******
2026-01-10 14:46:08.600221 | orchestrator | ===============================================================================
2026-01-10 14:46:08.600270 | orchestrator | Write ceph keys to the share directory --------------------------------- 13.52s
2026-01-10 14:46:08.600277 | orchestrator | Write ceph keys to the configuration directory -------------------------- 7.24s
2026-01-10 14:46:08.600284 | orchestrator | Check if ceph keys exist ------------------------------------------------ 4.18s
2026-01-10 14:46:08.600290 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 3.96s
2026-01-10 14:46:08.600296 | orchestrator | Check if target directories exist --------------------------------------- 3.14s
2026-01-10 14:46:08.600350 | orchestrator | Create share directory -------------------------------------------------- 1.03s
2026-01-10 14:46:08.600357 | orchestrator |
2026-01-10 14:46:08.600363 | orchestrator |
2026-01-10 14:46:08.600369 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-10 14:46:08.600376 | 
orchestrator | 2026-01-10 14:46:08.600382 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-10 14:46:08.600389 | orchestrator | Saturday 10 January 2026 14:43:30 +0000 (0:00:00.258) 0:00:00.258 ****** 2026-01-10 14:46:08.600395 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:46:08.600401 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:46:08.600405 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:46:08.600408 | orchestrator | 2026-01-10 14:46:08.600412 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-10 14:46:08.600416 | orchestrator | Saturday 10 January 2026 14:43:31 +0000 (0:00:00.307) 0:00:00.566 ****** 2026-01-10 14:46:08.600419 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2026-01-10 14:46:08.600424 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2026-01-10 14:46:08.600427 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2026-01-10 14:46:08.600433 | orchestrator | 2026-01-10 14:46:08.600439 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2026-01-10 14:46:08.600445 | orchestrator | 2026-01-10 14:46:08.600451 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-01-10 14:46:08.600458 | orchestrator | Saturday 10 January 2026 14:43:31 +0000 (0:00:00.466) 0:00:01.033 ****** 2026-01-10 14:46:08.600465 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:46:08.600471 | orchestrator | 2026-01-10 14:46:08.600477 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2026-01-10 14:46:08.600484 | orchestrator | Saturday 10 January 2026 14:43:32 +0000 (0:00:00.594) 0:00:01.628 ****** 2026-01-10 14:46:08.600518 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-10 14:46:08.600541 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': 
'5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-10 14:46:08.600549 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-10 14:46:08.600556 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-10 14:46:08.600564 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-10 14:46:08.600594 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-10 14:46:08.600604 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-10 14:46:08.600611 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-10 14:46:08.600617 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-10 14:46:08.600624 | orchestrator | 2026-01-10 14:46:08.600630 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2026-01-10 14:46:08.600637 | orchestrator | Saturday 10 January 2026 14:43:34 +0000 (0:00:02.085) 0:00:03.714 ****** 2026-01-10 14:46:08.600643 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:46:08.600650 | orchestrator | 2026-01-10 14:46:08.600656 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2026-01-10 14:46:08.600662 | orchestrator | Saturday 10 January 2026 14:43:34 +0000 (0:00:00.135) 0:00:03.849 ****** 2026-01-10 14:46:08.600668 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:46:08.600675 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:46:08.600681 | orchestrator | skipping: [testbed-node-2] 2026-01-10 
14:46:08.600687 | orchestrator | 2026-01-10 14:46:08.600693 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2026-01-10 14:46:08.600700 | orchestrator | Saturday 10 January 2026 14:43:34 +0000 (0:00:00.453) 0:00:04.302 ****** 2026-01-10 14:46:08.600707 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-10 14:46:08.600713 | orchestrator | 2026-01-10 14:46:08.600719 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-01-10 14:46:08.600730 | orchestrator | Saturday 10 January 2026 14:43:35 +0000 (0:00:00.890) 0:00:05.193 ****** 2026-01-10 14:46:08.600737 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:46:08.600743 | orchestrator | 2026-01-10 14:46:08.600750 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2026-01-10 14:46:08.600756 | orchestrator | Saturday 10 January 2026 14:43:36 +0000 (0:00:00.566) 0:00:05.759 ****** 2026-01-10 14:46:08.600767 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-10 14:46:08.600778 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-10 14:46:08.600785 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 
'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-10 14:46:08.600792 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-10 14:46:08.600804 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-10 14:46:08.600816 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-10 14:46:08.600823 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-10 14:46:08.600832 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-10 14:46:08.600839 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-10 14:46:08.600846 | orchestrator | 2026-01-10 14:46:08.600852 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2026-01-10 14:46:08.600859 | orchestrator | Saturday 10 January 2026 14:43:39 +0000 (0:00:03.471) 0:00:09.230 ****** 2026-01-10 14:46:08.600866 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-10 14:46:08.600877 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-10 14:46:08.600887 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-10 14:46:08.600894 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:46:08.600903 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 
'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-10 14:46:08.600911 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-10 14:46:08.600917 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-10 14:46:08.600931 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:46:08.600937 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-10 14:46:08.600948 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-10 14:46:08.600957 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-10 14:46:08.600964 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:46:08.600970 | orchestrator | 
2026-01-10 14:46:08.600976 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2026-01-10 14:46:08.600983 | orchestrator | Saturday 10 January 2026 14:43:40 +0000 (0:00:00.933) 0:00:10.164 ****** 2026-01-10 14:46:08.600990 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-10 14:46:08.601001 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-10 14:46:08.601008 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-10 14:46:08.601015 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:46:08.601022 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-10 14:46:08.601029 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-10 14:46:08.601034 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-10 14:46:08.601038 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:46:08.601043 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 
'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-10 14:46:08.601050 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-10 14:46:08.601054 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-10 14:46:08.601059 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:46:08.601063 | orchestrator | 2026-01-10 14:46:08.601067 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2026-01-10 14:46:08.601074 | orchestrator | Saturday 10 January 2026 14:43:41 +0000 (0:00:00.780) 0:00:10.945 ****** 2026-01-10 14:46:08.601081 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-10 14:46:08.601087 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 
'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-10 14:46:08.601094 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-10 14:46:08.601099 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-10 14:46:08.601106 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-10 14:46:08.601113 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-10 14:46:08.601118 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-10 14:46:08.601124 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-10 14:46:08.601129 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-10 14:46:08.601133 | orchestrator | 2026-01-10 14:46:08.601138 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2026-01-10 14:46:08.601143 | orchestrator | Saturday 10 January 2026 14:43:44 +0000 (0:00:03.389) 0:00:14.334 ****** 2026-01-10 14:46:08.601147 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-10 14:46:08.601190 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-10 14:46:08.601200 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 
'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-10 14:46:08.601212 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-10 14:46:08.601218 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-10 14:46:08.601243 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-10 14:46:08.601254 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-10 14:46:08.601264 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-10 14:46:08.601276 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-10 14:46:08.601283 | orchestrator | 2026-01-10 14:46:08.601290 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2026-01-10 14:46:08.601297 | orchestrator | Saturday 10 January 2026 14:43:50 +0000 (0:00:05.684) 0:00:20.018 ****** 2026-01-10 14:46:08.601303 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:46:08.601310 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:46:08.601314 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:46:08.601318 | orchestrator | 2026-01-10 14:46:08.601324 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2026-01-10 14:46:08.601331 | orchestrator | Saturday 10 January 2026 14:43:52 +0000 (0:00:01.515) 0:00:21.534 ****** 2026-01-10 14:46:08.601337 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:46:08.601343 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:46:08.601349 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:46:08.601355 | orchestrator | 2026-01-10 14:46:08.601361 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2026-01-10 14:46:08.601367 | orchestrator | Saturday 10 January 2026 14:43:52 +0000 (0:00:00.527) 0:00:22.062 ****** 2026-01-10 14:46:08.601373 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:46:08.601379 | 
orchestrator | skipping: [testbed-node-1] 2026-01-10 14:46:08.601386 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:46:08.601392 | orchestrator | 2026-01-10 14:46:08.601398 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2026-01-10 14:46:08.601405 | orchestrator | Saturday 10 January 2026 14:43:52 +0000 (0:00:00.329) 0:00:22.391 ****** 2026-01-10 14:46:08.601411 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:46:08.601418 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:46:08.601425 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:46:08.601431 | orchestrator | 2026-01-10 14:46:08.601437 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2026-01-10 14:46:08.601443 | orchestrator | Saturday 10 January 2026 14:43:53 +0000 (0:00:00.512) 0:00:22.903 ****** 2026-01-10 14:46:08.601451 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-10 14:46:08.601463 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-10 14:46:08.601479 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-10 14:46:08.601486 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:46:08.601493 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 
'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-10 14:46:08.601501 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-10 14:46:08.601508 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-10 14:46:08.601515 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:46:08.601525 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-10 14:46:08.601538 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-10 14:46:08.601546 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-10 14:46:08.601552 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:46:08.601559 | orchestrator | 2026-01-10 14:46:08.601565 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-01-10 14:46:08.601572 | orchestrator | Saturday 10 January 2026 14:43:54 +0000 (0:00:00.819) 0:00:23.723 ****** 2026-01-10 14:46:08.601579 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:46:08.601585 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:46:08.601592 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:46:08.601598 | orchestrator | 2026-01-10 14:46:08.601605 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2026-01-10 14:46:08.601611 | orchestrator | Saturday 10 January 2026 14:43:54 +0000 (0:00:00.302) 0:00:24.025 ****** 2026-01-10 14:46:08.601617 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-01-10 14:46:08.601624 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-01-10 14:46:08.601631 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-01-10 14:46:08.601638 | orchestrator | 2026-01-10 14:46:08.601645 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2026-01-10 14:46:08.601652 | orchestrator | Saturday 10 January 2026 14:43:56 +0000 (0:00:01.523) 0:00:25.549 ****** 2026-01-10 14:46:08.601659 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-10 14:46:08.601665 | orchestrator | 2026-01-10 14:46:08.601672 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] 
****************************** 2026-01-10 14:46:08.601678 | orchestrator | Saturday 10 January 2026 14:43:56 +0000 (0:00:00.959) 0:00:26.509 ****** 2026-01-10 14:46:08.601685 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:46:08.601691 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:46:08.601697 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:46:08.601704 | orchestrator | 2026-01-10 14:46:08.601710 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2026-01-10 14:46:08.601716 | orchestrator | Saturday 10 January 2026 14:43:57 +0000 (0:00:00.843) 0:00:27.352 ****** 2026-01-10 14:46:08.601723 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-01-10 14:46:08.601734 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-01-10 14:46:08.601740 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-10 14:46:08.601747 | orchestrator | 2026-01-10 14:46:08.601754 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2026-01-10 14:46:08.601760 | orchestrator | Saturday 10 January 2026 14:43:58 +0000 (0:00:01.121) 0:00:28.474 ****** 2026-01-10 14:46:08.601767 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:46:08.601773 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:46:08.601780 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:46:08.601786 | orchestrator | 2026-01-10 14:46:08.601792 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2026-01-10 14:46:08.601799 | orchestrator | Saturday 10 January 2026 14:43:59 +0000 (0:00:00.412) 0:00:28.887 ****** 2026-01-10 14:46:08.601805 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-01-10 14:46:08.601812 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-01-10 14:46:08.601818 | orchestrator | changed: [testbed-node-2] => (item={'src': 
'crontab.j2', 'dest': 'crontab'}) 2026-01-10 14:46:08.601824 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-01-10 14:46:08.601838 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-01-10 14:46:08.601846 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-01-10 14:46:08.601853 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-01-10 14:46:08.601860 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-01-10 14:46:08.601867 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-01-10 14:46:08.601873 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-01-10 14:46:08.601880 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-01-10 14:46:08.601886 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-01-10 14:46:08.601896 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-01-10 14:46:08.601903 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-01-10 14:46:08.601909 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-01-10 14:46:08.601915 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-01-10 14:46:08.601921 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-01-10 14:46:08.601928 | 
orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-01-10 14:46:08.601935 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-01-10 14:46:08.601940 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-01-10 14:46:08.601944 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-01-10 14:46:08.601948 | orchestrator | 2026-01-10 14:46:08.601953 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2026-01-10 14:46:08.601959 | orchestrator | Saturday 10 January 2026 14:44:08 +0000 (0:00:08.974) 0:00:37.861 ****** 2026-01-10 14:46:08.601965 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-01-10 14:46:08.601975 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-01-10 14:46:08.601982 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-01-10 14:46:08.601993 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-01-10 14:46:08.601999 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-01-10 14:46:08.602004 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-01-10 14:46:08.602010 | orchestrator | 2026-01-10 14:46:08.602051 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2026-01-10 14:46:08.602058 | orchestrator | Saturday 10 January 2026 14:44:11 +0000 (0:00:02.875) 0:00:40.737 ****** 2026-01-10 14:46:08.602065 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-10 14:46:08.602078 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-10 14:46:08.602090 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-10 14:46:08.602096 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-10 14:46:08.602108 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-10 14:46:08.602115 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-10 14:46:08.602121 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-10 14:46:08.602131 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-10 14:46:08.602141 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-10 14:46:08.602147 | orchestrator | 2026-01-10 14:46:08.602154 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-01-10 14:46:08.602161 | orchestrator | Saturday 10 January 2026 14:44:13 +0000 (0:00:02.402) 0:00:43.139 ****** 2026-01-10 14:46:08.602167 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:46:08.602174 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:46:08.602185 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:46:08.602192 | orchestrator | 2026-01-10 14:46:08.602196 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2026-01-10 14:46:08.602199 | orchestrator | Saturday 10 January 2026 14:44:13 +0000 (0:00:00.329) 0:00:43.468 ****** 2026-01-10 14:46:08.602203 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:46:08.602207 | orchestrator | 2026-01-10 14:46:08.602210 | orchestrator | TASK 
[keystone : Creating Keystone database user and setting permissions] ****** 2026-01-10 14:46:08.602214 | orchestrator | Saturday 10 January 2026 14:44:16 +0000 (0:00:02.625) 0:00:46.094 ****** 2026-01-10 14:46:08.602218 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:46:08.602252 | orchestrator | 2026-01-10 14:46:08.602260 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2026-01-10 14:46:08.602264 | orchestrator | Saturday 10 January 2026 14:44:19 +0000 (0:00:02.476) 0:00:48.571 ****** 2026-01-10 14:46:08.602268 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:46:08.602272 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:46:08.602276 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:46:08.602279 | orchestrator | 2026-01-10 14:46:08.602283 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2026-01-10 14:46:08.602287 | orchestrator | Saturday 10 January 2026 14:44:20 +0000 (0:00:01.206) 0:00:49.778 ****** 2026-01-10 14:46:08.602291 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:46:08.602295 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:46:08.602299 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:46:08.602303 | orchestrator | 2026-01-10 14:46:08.602307 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2026-01-10 14:46:08.602311 | orchestrator | Saturday 10 January 2026 14:44:20 +0000 (0:00:00.317) 0:00:50.095 ****** 2026-01-10 14:46:08.602315 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:46:08.602319 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:46:08.602323 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:46:08.602326 | orchestrator | 2026-01-10 14:46:08.602332 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2026-01-10 14:46:08.602339 | orchestrator | Saturday 10 January 2026 14:44:20 +0000 (0:00:00.334) 
0:00:50.429 ****** 2026-01-10 14:46:08.602345 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:46:08.602349 | orchestrator | 2026-01-10 14:46:08.602352 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2026-01-10 14:46:08.602356 | orchestrator | Saturday 10 January 2026 14:44:35 +0000 (0:00:14.755) 0:01:05.185 ****** 2026-01-10 14:46:08.602360 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:46:08.602364 | orchestrator | 2026-01-10 14:46:08.602369 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-01-10 14:46:08.602376 | orchestrator | Saturday 10 January 2026 14:44:47 +0000 (0:00:11.625) 0:01:16.810 ****** 2026-01-10 14:46:08.602381 | orchestrator | 2026-01-10 14:46:08.602385 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-01-10 14:46:08.602388 | orchestrator | Saturday 10 January 2026 14:44:47 +0000 (0:00:00.064) 0:01:16.874 ****** 2026-01-10 14:46:08.602392 | orchestrator | 2026-01-10 14:46:08.602396 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-01-10 14:46:08.602401 | orchestrator | Saturday 10 January 2026 14:44:47 +0000 (0:00:00.065) 0:01:16.940 ****** 2026-01-10 14:46:08.602407 | orchestrator | 2026-01-10 14:46:08.602413 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2026-01-10 14:46:08.602417 | orchestrator | Saturday 10 January 2026 14:44:47 +0000 (0:00:00.062) 0:01:17.003 ****** 2026-01-10 14:46:08.602421 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:46:08.602425 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:46:08.602432 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:46:08.602439 | orchestrator | 2026-01-10 14:46:08.602445 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2026-01-10 14:46:08.602456 | 
orchestrator | Saturday 10 January 2026 14:44:56 +0000 (0:00:08.860) 0:01:25.864 ****** 2026-01-10 14:46:08.602469 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:46:08.602476 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:46:08.602482 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:46:08.602488 | orchestrator | 2026-01-10 14:46:08.602499 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2026-01-10 14:46:08.602506 | orchestrator | Saturday 10 January 2026 14:45:03 +0000 (0:00:07.422) 0:01:33.286 ****** 2026-01-10 14:46:08.602511 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:46:08.602518 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:46:08.602523 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:46:08.602530 | orchestrator | 2026-01-10 14:46:08.602536 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-01-10 14:46:08.602542 | orchestrator | Saturday 10 January 2026 14:45:09 +0000 (0:00:05.727) 0:01:39.014 ****** 2026-01-10 14:46:08.602548 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:46:08.602554 | orchestrator | 2026-01-10 14:46:08.602559 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2026-01-10 14:46:08.602564 | orchestrator | Saturday 10 January 2026 14:45:10 +0000 (0:00:00.763) 0:01:39.777 ****** 2026-01-10 14:46:08.602570 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:46:08.602576 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:46:08.602582 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:46:08.602588 | orchestrator | 2026-01-10 14:46:08.602599 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2026-01-10 14:46:08.602605 | orchestrator | Saturday 10 January 2026 14:45:11 +0000 (0:00:00.864) 0:01:40.641 
****** 2026-01-10 14:46:08.602612 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:46:08.602618 | orchestrator | 2026-01-10 14:46:08.602624 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2026-01-10 14:46:08.602630 | orchestrator | Saturday 10 January 2026 14:45:12 +0000 (0:00:01.770) 0:01:42.411 ****** 2026-01-10 14:46:08.602636 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2026-01-10 14:46:08.602643 | orchestrator | 2026-01-10 14:46:08.602649 | orchestrator | TASK [service-ks-register : keystone | Creating services] ********************** 2026-01-10 14:46:08.602655 | orchestrator | Saturday 10 January 2026 14:45:26 +0000 (0:00:13.532) 0:01:55.944 ****** 2026-01-10 14:46:08.602661 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2026-01-10 14:46:08.602666 | orchestrator | 2026-01-10 14:46:08.602672 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] ********************* 2026-01-10 14:46:08.602677 | orchestrator | Saturday 10 January 2026 14:45:54 +0000 (0:00:28.358) 0:02:24.302 ****** 2026-01-10 14:46:08.602683 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2026-01-10 14:46:08.602689 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2026-01-10 14:46:08.602695 | orchestrator | 2026-01-10 14:46:08.602701 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2026-01-10 14:46:08.602708 | orchestrator | Saturday 10 January 2026 14:46:01 +0000 (0:00:06.403) 0:02:30.705 ****** 2026-01-10 14:46:08.602715 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:46:08.602720 | orchestrator | 2026-01-10 14:46:08.602727 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2026-01-10 14:46:08.602733 | orchestrator | Saturday 10 January 2026 14:46:01 
+0000 (0:00:00.139) 0:02:30.845 ****** 2026-01-10 14:46:08.602740 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:46:08.602747 | orchestrator | 2026-01-10 14:46:08.602753 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2026-01-10 14:46:08.602759 | orchestrator | Saturday 10 January 2026 14:46:01 +0000 (0:00:00.121) 0:02:30.966 ****** 2026-01-10 14:46:08.602765 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:46:08.602771 | orchestrator | 2026-01-10 14:46:08.602776 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2026-01-10 14:46:08.602788 | orchestrator | Saturday 10 January 2026 14:46:01 +0000 (0:00:00.129) 0:02:31.096 ****** 2026-01-10 14:46:08.602794 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:46:08.602800 | orchestrator | 2026-01-10 14:46:08.602806 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2026-01-10 14:46:08.602812 | orchestrator | Saturday 10 January 2026 14:46:02 +0000 (0:00:00.527) 0:02:31.623 ****** 2026-01-10 14:46:08.602818 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:46:08.602825 | orchestrator | 2026-01-10 14:46:08.602831 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-01-10 14:46:08.602837 | orchestrator | Saturday 10 January 2026 14:46:05 +0000 (0:00:03.020) 0:02:34.643 ****** 2026-01-10 14:46:08.602844 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:46:08.602850 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:46:08.602856 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:46:08.602864 | orchestrator | 2026-01-10 14:46:08.602870 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-10 14:46:08.602876 | orchestrator | testbed-node-0 : ok=33  changed=19  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-01-10 
14:46:08.602883 | orchestrator | testbed-node-1 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-01-10 14:46:08.602889 | orchestrator | testbed-node-2 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-01-10 14:46:08.602896 | orchestrator | 2026-01-10 14:46:08.602903 | orchestrator | 2026-01-10 14:46:08.602910 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-10 14:46:08.602916 | orchestrator | Saturday 10 January 2026 14:46:05 +0000 (0:00:00.442) 0:02:35.085 ****** 2026-01-10 14:46:08.602922 | orchestrator | =============================================================================== 2026-01-10 14:46:08.602929 | orchestrator | service-ks-register : keystone | Creating services --------------------- 28.36s 2026-01-10 14:46:08.602936 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 14.76s 2026-01-10 14:46:08.602950 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 13.53s 2026-01-10 14:46:08.602957 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 11.63s 2026-01-10 14:46:08.602964 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 8.97s 2026-01-10 14:46:08.602970 | orchestrator | keystone : Restart keystone-ssh container ------------------------------- 8.86s 2026-01-10 14:46:08.602976 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 7.42s 2026-01-10 14:46:08.602982 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 6.40s 2026-01-10 14:46:08.602989 | orchestrator | keystone : Restart keystone container ----------------------------------- 5.73s 2026-01-10 14:46:08.602995 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 5.68s 2026-01-10 14:46:08.603001 | orchestrator | 
service-cert-copy : keystone | Copying over extra CA certificates ------- 3.47s 2026-01-10 14:46:08.603007 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.39s 2026-01-10 14:46:08.603018 | orchestrator | keystone : Creating default user role ----------------------------------- 3.02s 2026-01-10 14:46:08.603024 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.88s 2026-01-10 14:46:08.603030 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.63s 2026-01-10 14:46:08.603036 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.48s 2026-01-10 14:46:08.603043 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.40s 2026-01-10 14:46:08.603050 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 2.09s 2026-01-10 14:46:08.603057 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.77s 2026-01-10 14:46:08.603069 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 1.52s 2026-01-10 14:46:08.603117 | orchestrator | 2026-01-10 14:46:08 | INFO  | Task 3ff09c9d-e59e-4f49-a56b-241b5307dbd4 is in state STARTED 2026-01-10 14:46:08.603123 | orchestrator | 2026-01-10 14:46:08 | INFO  | Task 357e43dc-d06c-484d-ba54-c5ba1b41351b is in state STARTED 2026-01-10 14:46:08.603127 | orchestrator | 2026-01-10 14:46:08 | INFO  | Task 13bf507d-c423-40e2-93bf-4f452de513b1 is in state STARTED 2026-01-10 14:46:08.603132 | orchestrator | 2026-01-10 14:46:08 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:46:11.630891 | orchestrator | 2026-01-10 14:46:11 | INFO  | Task eb9051fe-92a1-413c-8ece-65fea6179abd is in state STARTED 2026-01-10 14:46:11.630940 | orchestrator | 2026-01-10 14:46:11 | INFO  | Task c84560a3-2c2e-44ed-8959-cd0d95ccba88 is in state STARTED 2026-01-10 
14:46:11.630946 | orchestrator | 2026-01-10 14:46:11 | INFO  | Task 3ff09c9d-e59e-4f49-a56b-241b5307dbd4 is in state STARTED 2026-01-10 14:46:11.630950 | orchestrator | 2026-01-10 14:46:11 | INFO  | Task 357e43dc-d06c-484d-ba54-c5ba1b41351b is in state STARTED 2026-01-10 14:46:11.630954 | orchestrator | 2026-01-10 14:46:11 | INFO  | Task 13bf507d-c423-40e2-93bf-4f452de513b1 is in state STARTED 2026-01-10 14:46:11.630958 | orchestrator | 2026-01-10 14:46:11 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:46:14.665509 | orchestrator | 2026-01-10 14:46:14 | INFO  | Task eb9051fe-92a1-413c-8ece-65fea6179abd is in state STARTED 2026-01-10 14:46:14.667503 | orchestrator | 2026-01-10 14:46:14 | INFO  | Task c84560a3-2c2e-44ed-8959-cd0d95ccba88 is in state STARTED 2026-01-10 14:46:14.669333 | orchestrator | 2026-01-10 14:46:14 | INFO  | Task 3ff09c9d-e59e-4f49-a56b-241b5307dbd4 is in state STARTED 2026-01-10 14:46:14.670571 | orchestrator | 2026-01-10 14:46:14 | INFO  | Task 357e43dc-d06c-484d-ba54-c5ba1b41351b is in state STARTED 2026-01-10 14:46:14.671918 | orchestrator | 2026-01-10 14:46:14 | INFO  | Task 13bf507d-c423-40e2-93bf-4f452de513b1 is in state STARTED 2026-01-10 14:46:14.672022 | orchestrator | 2026-01-10 14:46:14 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:46:17.722374 | orchestrator | 2026-01-10 14:46:17 | INFO  | Task eb9051fe-92a1-413c-8ece-65fea6179abd is in state STARTED 2026-01-10 14:46:17.723482 | orchestrator | 2026-01-10 14:46:17 | INFO  | Task c84560a3-2c2e-44ed-8959-cd0d95ccba88 is in state STARTED 2026-01-10 14:46:17.725258 | orchestrator | 2026-01-10 14:46:17 | INFO  | Task 3ff09c9d-e59e-4f49-a56b-241b5307dbd4 is in state STARTED 2026-01-10 14:46:17.726665 | orchestrator | 2026-01-10 14:46:17 | INFO  | Task 357e43dc-d06c-484d-ba54-c5ba1b41351b is in state STARTED 2026-01-10 14:46:17.727724 | orchestrator | 2026-01-10 14:46:17 | INFO  | Task 13bf507d-c423-40e2-93bf-4f452de513b1 is in state STARTED 2026-01-10 
14:46:17.727840 | orchestrator | 2026-01-10 14:46:17 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:46:20.785104 | orchestrator | 2026-01-10 14:46:20 | INFO  | Task eb9051fe-92a1-413c-8ece-65fea6179abd is in state STARTED 2026-01-10 14:46:20.785186 | orchestrator | 2026-01-10 14:46:20 | INFO  | Task c84560a3-2c2e-44ed-8959-cd0d95ccba88 is in state STARTED 2026-01-10 14:46:20.785195 | orchestrator | 2026-01-10 14:46:20 | INFO  | Task 3ff09c9d-e59e-4f49-a56b-241b5307dbd4 is in state STARTED 2026-01-10 14:46:20.785202 | orchestrator | 2026-01-10 14:46:20 | INFO  | Task 357e43dc-d06c-484d-ba54-c5ba1b41351b is in state STARTED 2026-01-10 14:46:20.786725 | orchestrator | 2026-01-10 14:46:20 | INFO  | Task 13bf507d-c423-40e2-93bf-4f452de513b1 is in state SUCCESS 2026-01-10 14:46:20.786761 | orchestrator | 2026-01-10 14:46:20 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:46:23.824977 | orchestrator | 2026-01-10 14:46:23 | INFO  | Task eb9051fe-92a1-413c-8ece-65fea6179abd is in state STARTED 2026-01-10 14:46:23.826711 | orchestrator | 2026-01-10 14:46:23 | INFO  | Task c84560a3-2c2e-44ed-8959-cd0d95ccba88 is in state STARTED 2026-01-10 14:46:23.828645 | orchestrator | 2026-01-10 14:46:23 | INFO  | Task 7b3342ef-d4e5-4a54-866f-2872fb59443d is in state STARTED 2026-01-10 14:46:23.830177 | orchestrator | 2026-01-10 14:46:23 | INFO  | Task 3ff09c9d-e59e-4f49-a56b-241b5307dbd4 is in state STARTED 2026-01-10 14:46:23.831515 | orchestrator | 2026-01-10 14:46:23 | INFO  | Task 357e43dc-d06c-484d-ba54-c5ba1b41351b is in state STARTED 2026-01-10 14:46:23.831561 | orchestrator | 2026-01-10 14:46:23 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:46:26.866648 | orchestrator | 2026-01-10 14:46:26 | INFO  | Task eb9051fe-92a1-413c-8ece-65fea6179abd is in state STARTED 2026-01-10 14:46:26.867762 | orchestrator | 2026-01-10 14:46:26 | INFO  | Task c84560a3-2c2e-44ed-8959-cd0d95ccba88 is in state STARTED 2026-01-10 14:46:26.869352 | orchestrator 
| 2026-01-10 14:46:26 | INFO  | Task 7b3342ef-d4e5-4a54-866f-2872fb59443d is in state STARTED 2026-01-10 14:46:26.871011 | orchestrator | 2026-01-10 14:46:26 | INFO  | Task 3ff09c9d-e59e-4f49-a56b-241b5307dbd4 is in state STARTED 2026-01-10 14:46:26.873632 | orchestrator | 2026-01-10 14:46:26 | INFO  | Task 357e43dc-d06c-484d-ba54-c5ba1b41351b is in state STARTED 2026-01-10 14:46:26.873846 | orchestrator | 2026-01-10 14:46:26 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:46:29.929979 | orchestrator | 2026-01-10 14:46:29 | INFO  | Task eb9051fe-92a1-413c-8ece-65fea6179abd is in state STARTED 2026-01-10 14:46:29.932973 | orchestrator | 2026-01-10 14:46:29 | INFO  | Task c84560a3-2c2e-44ed-8959-cd0d95ccba88 is in state STARTED 2026-01-10 14:46:29.935869 | orchestrator | 2026-01-10 14:46:29 | INFO  | Task 7b3342ef-d4e5-4a54-866f-2872fb59443d is in state STARTED 2026-01-10 14:46:29.937795 | orchestrator | 2026-01-10 14:46:29 | INFO  | Task 3ff09c9d-e59e-4f49-a56b-241b5307dbd4 is in state STARTED 2026-01-10 14:46:29.939427 | orchestrator | 2026-01-10 14:46:29 | INFO  | Task 357e43dc-d06c-484d-ba54-c5ba1b41351b is in state STARTED 2026-01-10 14:46:29.939482 | orchestrator | 2026-01-10 14:46:29 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:46:32.983126 | orchestrator | 2026-01-10 14:46:32 | INFO  | Task eb9051fe-92a1-413c-8ece-65fea6179abd is in state STARTED 2026-01-10 14:46:32.984747 | orchestrator | 2026-01-10 14:46:32 | INFO  | Task c84560a3-2c2e-44ed-8959-cd0d95ccba88 is in state STARTED 2026-01-10 14:46:32.986289 | orchestrator | 2026-01-10 14:46:32 | INFO  | Task 7b3342ef-d4e5-4a54-866f-2872fb59443d is in state STARTED 2026-01-10 14:46:32.987458 | orchestrator | 2026-01-10 14:46:32 | INFO  | Task 3ff09c9d-e59e-4f49-a56b-241b5307dbd4 is in state STARTED 2026-01-10 14:46:32.988584 | orchestrator | 2026-01-10 14:46:32 | INFO  | Task 357e43dc-d06c-484d-ba54-c5ba1b41351b is in state STARTED 2026-01-10 14:46:32.988808 | orchestrator | 
2026-01-10 14:46:32 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:46:36.042842 | orchestrator | 2026-01-10 14:46:36 | INFO  | Task eb9051fe-92a1-413c-8ece-65fea6179abd is in state STARTED 2026-01-10 14:46:36.044388 | orchestrator | 2026-01-10 14:46:36 | INFO  | Task c84560a3-2c2e-44ed-8959-cd0d95ccba88 is in state STARTED 2026-01-10 14:46:36.046462 | orchestrator | 2026-01-10 14:46:36 | INFO  | Task 7b3342ef-d4e5-4a54-866f-2872fb59443d is in state STARTED 2026-01-10 14:46:36.048225 | orchestrator | 2026-01-10 14:46:36 | INFO  | Task 3ff09c9d-e59e-4f49-a56b-241b5307dbd4 is in state STARTED 2026-01-10 14:46:36.050038 | orchestrator | 2026-01-10 14:46:36 | INFO  | Task 357e43dc-d06c-484d-ba54-c5ba1b41351b is in state STARTED 2026-01-10 14:46:36.050071 | orchestrator | 2026-01-10 14:46:36 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:46:39.096629 | orchestrator | 2026-01-10 14:46:39 | INFO  | Task eb9051fe-92a1-413c-8ece-65fea6179abd is in state STARTED 2026-01-10 14:46:39.097733 | orchestrator | 2026-01-10 14:46:39 | INFO  | Task c84560a3-2c2e-44ed-8959-cd0d95ccba88 is in state STARTED 2026-01-10 14:46:39.101485 | orchestrator | 2026-01-10 14:46:39 | INFO  | Task 7b3342ef-d4e5-4a54-866f-2872fb59443d is in state STARTED 2026-01-10 14:46:39.103618 | orchestrator | 2026-01-10 14:46:39 | INFO  | Task 3ff09c9d-e59e-4f49-a56b-241b5307dbd4 is in state STARTED 2026-01-10 14:46:39.104918 | orchestrator | 2026-01-10 14:46:39 | INFO  | Task 357e43dc-d06c-484d-ba54-c5ba1b41351b is in state STARTED 2026-01-10 14:46:39.105428 | orchestrator | 2026-01-10 14:46:39 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:46:42.141172 | orchestrator | 2026-01-10 14:46:42 | INFO  | Task eb9051fe-92a1-413c-8ece-65fea6179abd is in state STARTED 2026-01-10 14:46:42.141631 | orchestrator | 2026-01-10 14:46:42 | INFO  | Task c84560a3-2c2e-44ed-8959-cd0d95ccba88 is in state STARTED 2026-01-10 14:46:42.141931 | orchestrator | 2026-01-10 14:46:42 | INFO  | 
Task 7b3342ef-d4e5-4a54-866f-2872fb59443d is in state STARTED
2026-01-10 14:46:42.142721 | orchestrator | 2026-01-10 14:46:42 | INFO  | Task 3ff09c9d-e59e-4f49-a56b-241b5307dbd4 is in state STARTED
2026-01-10 14:46:42.143512 | orchestrator | 2026-01-10 14:46:42 | INFO  | Task 357e43dc-d06c-484d-ba54-c5ba1b41351b is in state STARTED
2026-01-10 14:46:42.143544 | orchestrator | 2026-01-10 14:46:42 | INFO  | Wait 1 second(s) until the next check
[... identical polling of tasks eb9051fe, c84560a3, 7b3342ef, 3ff09c9d and 357e43dc repeated every ~3 seconds from 14:46:45 through 14:47:52 omitted ...]
2026-01-10 14:47:55.088984 | orchestrator | 2026-01-10 14:47:55 | INFO  | Task eb9051fe-92a1-413c-8ece-65fea6179abd is in state STARTED
2026-01-10 14:47:55.089747 | orchestrator | 2026-01-10 14:47:55 | INFO  | Task c84560a3-2c2e-44ed-8959-cd0d95ccba88 is in state STARTED
2026-01-10 14:47:55.090398 | orchestrator | 2026-01-10 14:47:55 | INFO  | Task 7b3342ef-d4e5-4a54-866f-2872fb59443d is in state SUCCESS
2026-01-10 14:47:55 | orchestrator |
PLAY [Apply role cephclient] ***************************************************

TASK [osism.services.cephclient : Include container tasks] *********************
Saturday 10 January 2026 14:45:25 +0000 (0:00:00.238)       0:00:00.238 ******
included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager

TASK [osism.services.cephclient : Create required directories] *****************
Saturday 10 January 2026 14:45:25 +0000 (0:00:00.236)       0:00:00.475 ******
changed: [testbed-manager] => (item=/opt/cephclient/configuration)
changed: [testbed-manager] => (item=/opt/cephclient/data)
ok: [testbed-manager] => (item=/opt/cephclient)

TASK [osism.services.cephclient : Copy configuration files] ********************
Saturday 10 January 2026 14:45:26 +0000 (0:00:01.340)       0:00:01.815 ******
changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'})

TASK [osism.services.cephclient : Copy keyring file] ***************************
Saturday 10 January 2026 14:45:28 +0000 (0:00:01.579)       0:00:03.395 ******
changed: [testbed-manager]

TASK [osism.services.cephclient : Copy docker-compose.yml file] ****************
Saturday 10 January 2026 14:45:29 +0000 (0:00:00.968)       0:00:04.363 ******
changed: [testbed-manager]

TASK [osism.services.cephclient : Manage cephclient service] *******************
Saturday 10 January 2026 14:45:30 +0000 (0:00:00.947)       0:00:05.311 ******
FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left).
ok: [testbed-manager]

TASK [osism.services.cephclient : Copy wrapper scripts] ************************
Saturday 10 January 2026 14:46:11 +0000 (0:00:40.823)       0:00:46.134 ******
changed: [testbed-manager] => (item=ceph)
changed: [testbed-manager] => (item=ceph-authtool)
changed: [testbed-manager] => (item=rados)
changed: [testbed-manager] => (item=radosgw-admin)
changed: [testbed-manager] => (item=rbd)

TASK [osism.services.cephclient : Remove old wrapper scripts] ******************
Saturday 10 January 2026 14:46:14 +0000 (0:00:03.617)       0:00:49.751 ******
ok: [testbed-manager] => (item=crushtool)

TASK [osism.services.cephclient : Include package tasks] ***********************
Saturday 10 January 2026 14:46:15 +0000 (0:00:00.408)       0:00:50.159 ******
skipping: [testbed-manager]

TASK [osism.services.cephclient : Include rook task] ***************************
Saturday 10 January 2026 14:46:15 +0000 (0:00:00.513)       0:00:50.303 ******
skipping: [testbed-manager]

RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] *******
Saturday 10 January 2026 14:46:15 +0000 (0:00:00.513)       0:00:50.816 ******
changed: [testbed-manager]

RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] ***
Saturday 10 January 2026 14:46:17 +0000 (0:00:01.328)       0:00:52.145 ******
changed: [testbed-manager]

RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ******
Saturday 10 January 2026 14:46:17 +0000 (0:00:00.709)       0:00:52.854 ******
changed: [testbed-manager]

RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] *****
Saturday 10 January 2026 14:46:18 +0000 (0:00:00.522)       0:00:53.377 ******
ok: [testbed-manager] => (item=ceph)
ok: [testbed-manager] => (item=rados)
ok: [testbed-manager] => (item=radosgw-admin)
ok: [testbed-manager] => (item=rbd)

PLAY RECAP *********************************************************************
testbed-manager            : ok=12  changed=8  unreachable=0  failed=0  skipped=2  rescued=0  ignored=0

TASKS RECAP ********************************************************************
Saturday 10 January 2026 14:46:19 +0000 (0:00:01.381)       0:00:54.758 ******
===============================================================================
osism.services.cephclient : Manage cephclient service ------------------ 40.82s
osism.services.cephclient : Copy wrapper scripts ------------------------ 3.62s
osism.services.cephclient : Copy configuration files -------------------- 1.58s
osism.services.cephclient : Copy bash completion scripts ---------------- 1.38s
osism.services.cephclient : Create required directories ----------------- 1.34s
osism.services.cephclient : Restart cephclient service ------------------ 1.33s
osism.services.cephclient : Copy keyring file --------------------------- 0.97s
osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.95s
osism.services.cephclient : Ensure that all containers are up ----------- 0.71s
osism.services.cephclient : Wait for an healthy service ----------------- 0.52s
osism.services.cephclient : Include rook task --------------------------- 0.51s
osism.services.cephclient : Remove old wrapper scripts ------------------ 0.41s
osism.services.cephclient : Include container tasks --------------------- 0.24s
osism.services.cephclient : Include package tasks ----------------------- 0.14s

[WARNING]: Collection community.general does not support Ansible version 2.16.14

PLAY [Bootstraph ceph dashboard] ***********************************************

TASK [Disable the ceph dashboard] **********************************************
Saturday 10 January 2026 14:46:23 +0000 (0:00:00.239)       0:00:00.239 ******
changed: [testbed-manager]

TASK [Set mgr/dashboard/ssl to false] ******************************************
Saturday 10 January 2026 14:46:25 +0000 (0:00:01.834)       0:00:02.073 ******
changed: [testbed-manager]

TASK [Set mgr/dashboard/server_port to 7000] ***********************************
Saturday 10 January 2026 14:46:26 +0000 (0:00:01.088)       0:00:03.162 ******
changed: [testbed-manager]

TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ********************************
Saturday 10 January 2026 14:46:27 +0000 (0:00:01.141)       0:00:04.304 ******
changed: [testbed-manager]

TASK [Set mgr/dashboard/standby_behaviour to error] ****************************
Saturday 10 January 2026 14:46:29 +0000 (0:00:01.195)       0:00:05.500 ******
changed: [testbed-manager]

TASK [Set mgr/dashboard/standby_error_status_code to 404] **********************
Saturday 10 January 2026 14:46:30 +0000 (0:00:00.972)       0:00:06.472 ******
changed: [testbed-manager]

TASK [Enable the ceph dashboard] ***********************************************
Saturday 10 January 2026 14:46:31 +0000 (0:00:01.006)       0:00:07.479 ******
changed: [testbed-manager]

TASK [Write ceph_dashboard_password to temporary file] *************************
Saturday 10 January 2026 14:46:32 +0000 (0:00:01.130)       0:00:08.609 ******
changed: [testbed-manager]

TASK [Create admin user] *******************************************************
Saturday 10 January 2026 14:46:33 +0000 (0:00:01.117)       0:00:09.726 ******
changed: [testbed-manager]

TASK [Remove temporary file for ceph_dashboard_password] ***********************
Saturday 10 January 2026 14:47:28 +0000 (0:00:55.235)       0:01:04.962 ******
skipping: [testbed-manager]

PLAY [Restart ceph manager services] *******************************************

TASK [Restart ceph manager service] ********************************************
Saturday 10 January 2026 14:47:28 +0000 (0:00:00.152)       0:01:05.114 ******
changed: [testbed-node-0]

PLAY [Restart ceph manager services] *******************************************

TASK [Restart ceph manager service] ********************************************
Saturday 10 January 2026 14:47:30 +0000 (0:00:01.795)       0:01:06.910 ******
changed: [testbed-node-1]

PLAY [Restart ceph manager services] *******************************************

TASK [Restart ceph manager service] ********************************************
Saturday 10 January 2026 14:47:41 +0000 (0:00:11.421)       0:01:18.331 ******
changed: [testbed-node-2]

PLAY RECAP *********************************************************************
testbed-manager            : ok=9   changed=9   unreachable=0  failed=0  skipped=1  rescued=0  ignored=0
testbed-node-0             : ok=1   changed=1   unreachable=0  failed=0  skipped=0  rescued=0  ignored=0
testbed-node-1             : ok=1   changed=1   unreachable=0  failed=0  skipped=0  rescued=0  ignored=0
testbed-node-2             : ok=1   changed=1   unreachable=0  failed=0  skipped=0  rescued=0  ignored=0

TASKS RECAP ********************************************************************
Saturday 10 January 2026 14:47:53 +0000 (0:00:11.269)       0:01:29.601 ******
===============================================================================
Create admin user ------------------------------------------------------ 55.24s
Restart ceph manager service ------------------------------------------- 24.49s
Disable the ceph dashboard ---------------------------------------------- 1.83s
Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.20s
Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.14s
Enable the ceph dashboard ----------------------------------------------- 1.13s
Write ceph_dashboard_password to temporary file ------------------------- 1.12s
Set mgr/dashboard/ssl to false ------------------------------------------ 1.09s
Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.01s
Set mgr/dashboard/standby_behaviour to error ---------------------------- 0.97s
Remove temporary file for ceph_dashboard_password ----------------------- 0.15s
2026-01-10 14:47:55.093877 | orchestrator | 2026-01-10 14:47:55 | INFO  | Task 3ff09c9d-e59e-4f49-a56b-241b5307dbd4 is in state STARTED
2026-01-10 14:47:55.095353 | orchestrator | 2026-01-10 14:47:55 | INFO  | Task 357e43dc-d06c-484d-ba54-c5ba1b41351b is in state STARTED
2026-01-10 14:47:55.095386 | orchestrator | 2026-01-10 14:47:55 | INFO  | Wait 1 second(s) until the next check
[... identical polling of tasks eb9051fe, c84560a3, 3ff09c9d and 357e43dc repeated at 14:47:58 and 14:48:01 omitted ...]
2026-01-10 14:48:04.212708
| orchestrator | 2026-01-10 14:48:04 | INFO  | Task eb9051fe-92a1-413c-8ece-65fea6179abd is in state STARTED 2026-01-10 14:48:04.213188 | orchestrator | 2026-01-10 14:48:04 | INFO  | Task c84560a3-2c2e-44ed-8959-cd0d95ccba88 is in state STARTED 2026-01-10 14:48:04.213816 | orchestrator | 2026-01-10 14:48:04 | INFO  | Task 3ff09c9d-e59e-4f49-a56b-241b5307dbd4 is in state STARTED 2026-01-10 14:48:04.214503 | orchestrator | 2026-01-10 14:48:04 | INFO  | Task 357e43dc-d06c-484d-ba54-c5ba1b41351b is in state STARTED 2026-01-10 14:48:04.214523 | orchestrator | 2026-01-10 14:48:04 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:48:07.254946 | orchestrator | 2026-01-10 14:48:07 | INFO  | Task eb9051fe-92a1-413c-8ece-65fea6179abd is in state STARTED 2026-01-10 14:48:07.255554 | orchestrator | 2026-01-10 14:48:07 | INFO  | Task c84560a3-2c2e-44ed-8959-cd0d95ccba88 is in state STARTED 2026-01-10 14:48:07.257285 | orchestrator | 2026-01-10 14:48:07 | INFO  | Task 3ff09c9d-e59e-4f49-a56b-241b5307dbd4 is in state STARTED 2026-01-10 14:48:07.257707 | orchestrator | 2026-01-10 14:48:07 | INFO  | Task 357e43dc-d06c-484d-ba54-c5ba1b41351b is in state STARTED 2026-01-10 14:48:07.257794 | orchestrator | 2026-01-10 14:48:07 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:48:10.285365 | orchestrator | 2026-01-10 14:48:10 | INFO  | Task eb9051fe-92a1-413c-8ece-65fea6179abd is in state STARTED 2026-01-10 14:48:10.286298 | orchestrator | 2026-01-10 14:48:10 | INFO  | Task c84560a3-2c2e-44ed-8959-cd0d95ccba88 is in state STARTED 2026-01-10 14:48:10.286621 | orchestrator | 2026-01-10 14:48:10 | INFO  | Task 3ff09c9d-e59e-4f49-a56b-241b5307dbd4 is in state STARTED 2026-01-10 14:48:10.289232 | orchestrator | 2026-01-10 14:48:10 | INFO  | Task 357e43dc-d06c-484d-ba54-c5ba1b41351b is in state STARTED 2026-01-10 14:48:10.289300 | orchestrator | 2026-01-10 14:48:10 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:48:13.317972 | orchestrator | 2026-01-10 
14:48:13 | INFO  | Task eb9051fe-92a1-413c-8ece-65fea6179abd is in state STARTED 2026-01-10 14:48:13.318358 | orchestrator | 2026-01-10 14:48:13 | INFO  | Task c84560a3-2c2e-44ed-8959-cd0d95ccba88 is in state STARTED 2026-01-10 14:48:13.319188 | orchestrator | 2026-01-10 14:48:13 | INFO  | Task 3ff09c9d-e59e-4f49-a56b-241b5307dbd4 is in state STARTED 2026-01-10 14:48:13.319767 | orchestrator | 2026-01-10 14:48:13 | INFO  | Task 357e43dc-d06c-484d-ba54-c5ba1b41351b is in state STARTED 2026-01-10 14:48:13.319801 | orchestrator | 2026-01-10 14:48:13 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:48:16.346635 | orchestrator | 2026-01-10 14:48:16 | INFO  | Task eb9051fe-92a1-413c-8ece-65fea6179abd is in state STARTED 2026-01-10 14:48:16.347879 | orchestrator | 2026-01-10 14:48:16 | INFO  | Task c84560a3-2c2e-44ed-8959-cd0d95ccba88 is in state SUCCESS 2026-01-10 14:48:16.349223 | orchestrator | 2026-01-10 14:48:16.349261 | orchestrator | 2026-01-10 14:48:16.349269 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-10 14:48:16.349276 | orchestrator | 2026-01-10 14:48:16.349283 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-10 14:48:16.349289 | orchestrator | Saturday 10 January 2026 14:46:10 +0000 (0:00:00.265) 0:00:00.265 ****** 2026-01-10 14:48:16.349295 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:48:16.349304 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:48:16.349307 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:48:16.349311 | orchestrator | 2026-01-10 14:48:16.349315 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-10 14:48:16.349319 | orchestrator | Saturday 10 January 2026 14:46:10 +0000 (0:00:00.399) 0:00:00.665 ****** 2026-01-10 14:48:16.349323 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2026-01-10 14:48:16.349327 | orchestrator | ok: 
[testbed-node-1] => (item=enable_barbican_True)
2026-01-10 14:48:16.349331 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True)
2026-01-10 14:48:16.349335 | orchestrator |
2026-01-10 14:48:16.349338 | orchestrator | PLAY [Apply role barbican] *****************************************************
2026-01-10 14:48:16.349342 | orchestrator |
2026-01-10 14:48:16.349346 | orchestrator | TASK [barbican : include_tasks] ************************************************
2026-01-10 14:48:16.349349 | orchestrator | Saturday 10 January 2026 14:46:11 +0000 (0:00:00.503) 0:00:01.169 ******
2026-01-10 14:48:16.349353 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:48:16.349357 | orchestrator |
2026-01-10 14:48:16.349361 | orchestrator | TASK [service-ks-register : barbican | Creating services] **********************
2026-01-10 14:48:16.349365 | orchestrator | Saturday 10 January 2026 14:46:11 +0000 (0:00:00.497) 0:00:01.666 ******
2026-01-10 14:48:16.349369 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager))
2026-01-10 14:48:16.349372 | orchestrator |
2026-01-10 14:48:16.349376 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] *********************
2026-01-10 14:48:16.349525 | orchestrator | Saturday 10 January 2026 14:46:15 +0000 (0:00:03.518) 0:00:05.185 ******
2026-01-10 14:48:16.349533 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal)
2026-01-10 14:48:16.349550 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public)
2026-01-10 14:48:16.349554 | orchestrator |
2026-01-10 14:48:16.349558 | orchestrator | TASK [service-ks-register : barbican | Creating projects] **********************
2026-01-10 14:48:16.349562 | orchestrator | Saturday 10 January 2026 14:46:22 +0000 (0:00:07.470) 0:00:12.656 ******
2026-01-10 14:48:16.349566 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-01-10 14:48:16.349570 | orchestrator |
2026-01-10 14:48:16.349573 | orchestrator | TASK [service-ks-register : barbican | Creating users] *************************
2026-01-10 14:48:16.349577 | orchestrator | Saturday 10 January 2026 14:46:26 +0000 (0:00:03.601) 0:00:16.258 ******
2026-01-10 14:48:16.349581 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-01-10 14:48:16.349585 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service)
2026-01-10 14:48:16.349588 | orchestrator |
2026-01-10 14:48:16.349592 | orchestrator | TASK [service-ks-register : barbican | Creating roles] *************************
2026-01-10 14:48:16.349596 | orchestrator | Saturday 10 January 2026 14:46:30 +0000 (0:00:04.274) 0:00:20.532 ******
2026-01-10 14:48:16.349599 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-01-10 14:48:16.349603 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin)
2026-01-10 14:48:16.349607 | orchestrator | changed: [testbed-node-0] => (item=creator)
2026-01-10 14:48:16.349611 | orchestrator | changed: [testbed-node-0] => (item=observer)
2026-01-10 14:48:16.349615 | orchestrator | changed: [testbed-node-0] => (item=audit)
2026-01-10 14:48:16.349618 | orchestrator |
2026-01-10 14:48:16.349622 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ********************
2026-01-10 14:48:16.349626 | orchestrator | Saturday 10 January 2026 14:46:45 +0000 (0:00:15.204) 0:00:35.736 ******
2026-01-10 14:48:16.349629 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin)
2026-01-10 14:48:16.349633 | orchestrator |
2026-01-10 14:48:16.349637 | orchestrator | TASK [barbican : Ensuring config directories exist] ****************************
2026-01-10 14:48:16.349641 | orchestrator | Saturday 10 January 2026 14:46:49 +0000 (0:00:03.525) 0:00:39.262 ******
2026-01-10
14:48:16.349646 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-10 14:48:16.349659 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-10 14:48:16.349669 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-10 14:48:16.349674 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-10 14:48:16.349678 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-10 14:48:16.349683 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-10 14:48:16.349691 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-10 14:48:16.349696 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': 
['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-10 14:48:16.349703 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-10 14:48:16.349707 | orchestrator | 2026-01-10 14:48:16.349711 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2026-01-10 14:48:16.349716 | orchestrator | Saturday 10 January 2026 14:46:51 +0000 (0:00:02.037) 0:00:41.299 ****** 2026-01-10 14:48:16.349720 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2026-01-10 14:48:16.349724 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2026-01-10 14:48:16.349728 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2026-01-10 14:48:16.349732 | orchestrator | 2026-01-10 14:48:16.349735 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2026-01-10 14:48:16.349739 | orchestrator | Saturday 10 January 2026 14:46:52 +0000 (0:00:00.982) 0:00:42.281 ****** 2026-01-10 14:48:16.349743 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:48:16.349747 | orchestrator | 2026-01-10 14:48:16.349751 | 
orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2026-01-10 14:48:16.349754 | orchestrator | Saturday 10 January 2026 14:46:52 +0000 (0:00:00.200) 0:00:42.482 ****** 2026-01-10 14:48:16.349758 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:48:16.349762 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:48:16.349766 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:48:16.349769 | orchestrator | 2026-01-10 14:48:16.349773 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-01-10 14:48:16.349777 | orchestrator | Saturday 10 January 2026 14:46:53 +0000 (0:00:00.934) 0:00:43.417 ****** 2026-01-10 14:48:16.349781 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:48:16.349784 | orchestrator | 2026-01-10 14:48:16.349788 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2026-01-10 14:48:16.349792 | orchestrator | Saturday 10 January 2026 14:46:53 +0000 (0:00:00.523) 0:00:43.946 ****** 2026-01-10 14:48:16.349796 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-10 14:48:16.349806 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-10 14:48:16.349811 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-10 14:48:16.349817 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-10 14:48:16.349821 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-10 14:48:16.349825 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-10 14:48:16.349829 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-10 14:48:16.349838 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-10 14:48:16.349842 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-10 14:48:16.349846 | orchestrator | 2026-01-10 14:48:16.349850 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2026-01-10 14:48:16.349854 | orchestrator | Saturday 10 January 2026 14:46:57 +0000 (0:00:03.679) 0:00:47.626 ****** 2026-01-10 14:48:16.349860 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-01-10 14:48:16.349864 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-10 14:48:16.349869 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-10 14:48:16.349879 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:48:16.349910 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-01-10 14:48:16.349918 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 
'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-01-10 14:48:16.349926 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-01-10 14:48:16.349931 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-01-10 14:48:16.349934 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:48:16.349938 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-01-10 14:48:16.349946 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-01-10 14:48:16.349952 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:48:16.349961 | orchestrator |
2026-01-10 14:48:16.349973 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] ****
2026-01-10 14:48:16.349979 | orchestrator | Saturday 10 January 2026 14:47:00 +0000 (0:00:02.503) 0:00:50.130 ******
2026-01-10 14:48:16.349985 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-01-10 14:48:16.349995 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-01-10 14:48:16.350001 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-01-10 14:48:16.350007 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:48:16.350113 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-01-10 14:48:16.350181 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-01-10 14:48:16.350198 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-01-10 14:48:16.350204 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:48:16.350211 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-01-10 14:48:16.350223 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-01-10 14:48:16.350232 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-01-10 14:48:16.350250 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:48:16.350257 | orchestrator |
2026-01-10 14:48:16.350263 | orchestrator | TASK [barbican : Copying over config.json files for services] ******************
2026-01-10 14:48:16.350267 | orchestrator | Saturday 10 January 2026 14:47:01 +0000 (0:00:01.440) 0:00:51.570 ******
2026-01-10 14:48:16.350271 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-01-10 14:48:16.350280 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-01-10 14:48:16.350286 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-01-10 14:48:16.350290 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-01-10 14:48:16.350294 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-01-10 14:48:16.350301 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-01-10 14:48:16.350309 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-01-10 14:48:16.350313 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-01-10 14:48:16.350317 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-01-10 14:48:16.350321 | orchestrator |
2026-01-10 14:48:16.350324 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ********************************
2026-01-10 14:48:16.350328 | orchestrator | Saturday 10 January 2026 14:47:05 +0000 (0:00:04.012) 0:00:55.583 ******
2026-01-10 14:48:16.350332 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:48:16.350337 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:48:16.350341 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:48:16.350345 | orchestrator |
2026-01-10 14:48:16.350349 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] **********
2026-01-10 14:48:16.350352 | orchestrator | Saturday 10 January 2026 14:47:08 +0000 (0:00:03.278) 0:00:58.861 ******
2026-01-10 14:48:16.350356 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-01-10 14:48:16.350360 | orchestrator |
2026-01-10 14:48:16.350364 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] **************************
2026-01-10 14:48:16.350370 | orchestrator | Saturday 10 January 2026 14:47:10 +0000 (0:00:01.863) 0:01:00.725 ******
2026-01-10 14:48:16.350374 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:48:16.350377 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:48:16.350381 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:48:16.350385 | orchestrator |
2026-01-10 14:48:16.350389 | orchestrator | TASK [barbican : Copying over barbican.conf] ***********************************
2026-01-10 14:48:16.350395 | orchestrator | Saturday 10 January 2026 14:47:11 +0000 (0:00:00.620) 0:01:01.346 ******
2026-01-10 14:48:16.350401 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-01-10 14:48:16.350410 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-01-10 14:48:16.350417 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-01-10 14:48:16.350427 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-01-10 14:48:16.350437 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-01-10 14:48:16.350443 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-01-10 14:48:16.350450 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-01-10 14:48:16.350460 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-01-10 14:48:16.350467 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-01-10 14:48:16.350474 | orchestrator |
2026-01-10 14:48:16.350481 | orchestrator | TASK [barbican : Copying over existing policy file] ****************************
2026-01-10 14:48:16.350486 | orchestrator | Saturday 10 January 2026 14:47:21 +0000 (0:00:10.228) 0:01:11.574 ******
2026-01-10 14:48:16.350493 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-01-10 14:48:16.350500 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-01-10 14:48:16.350504 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-01-10 14:48:16.350508 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:48:16.350515 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-01-10 14:48:16.350522 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-01-10 14:48:16.350528 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-01-10 14:48:16.350537 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:48:16.350545 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-01-10 14:48:16.350552 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-01-10 14:48:16.350557 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-01-10 14:48:16.350563 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:48:16.350568 | orchestrator |
2026-01-10 14:48:16.350574 | orchestrator | TASK [barbican : Check barbican containers] ************************************
2026-01-10 14:48:16.350579 | orchestrator | Saturday 10 January 2026 14:47:23 +0000 (0:00:01.855) 0:01:13.430 ******
2026-01-10 14:48:16.350587 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-01-10 14:48:16.350594 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-01-10 14:48:16.350605 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-01-10 14:48:16.350611 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-01-10 14:48:16.350618 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-01-10 14:48:16.350629 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-01-10 14:48:16.350634 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-01-10 14:48:16.350643 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-01-10 14:48:16.350648 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-01-10 14:48:16.350652 | orchestrator |
2026-01-10 14:48:16.350656 | orchestrator | TASK [barbican : include_tasks] ************************************************
2026-01-10 14:48:16.350660 | orchestrator | Saturday 10 January
2026 14:47:28 +0000 (0:00:04.815) 0:01:18.245 ****** 2026-01-10 14:48:16.350664 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:48:16.350667 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:48:16.350671 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:48:16.350675 | orchestrator | 2026-01-10 14:48:16.350679 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2026-01-10 14:48:16.350682 | orchestrator | Saturday 10 January 2026 14:47:29 +0000 (0:00:01.131) 0:01:19.377 ****** 2026-01-10 14:48:16.350686 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:48:16.350690 | orchestrator | 2026-01-10 14:48:16.350693 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2026-01-10 14:48:16.350697 | orchestrator | Saturday 10 January 2026 14:47:32 +0000 (0:00:03.014) 0:01:22.391 ****** 2026-01-10 14:48:16.350701 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:48:16.350704 | orchestrator | 2026-01-10 14:48:16.350708 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2026-01-10 14:48:16.350712 | orchestrator | Saturday 10 January 2026 14:47:35 +0000 (0:00:02.893) 0:01:25.284 ****** 2026-01-10 14:48:16.350718 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:48:16.350725 | orchestrator | 2026-01-10 14:48:16.350731 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-01-10 14:48:16.350737 | orchestrator | Saturday 10 January 2026 14:47:47 +0000 (0:00:12.491) 0:01:37.776 ****** 2026-01-10 14:48:16.350744 | orchestrator | 2026-01-10 14:48:16.350750 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-01-10 14:48:16.350757 | orchestrator | Saturday 10 January 2026 14:47:47 +0000 (0:00:00.061) 0:01:37.837 ****** 2026-01-10 14:48:16.350762 | orchestrator | 2026-01-10 14:48:16.350769 | orchestrator | TASK 
[barbican : Flush handlers] *********************************************** 2026-01-10 14:48:16.350775 | orchestrator | Saturday 10 January 2026 14:47:47 +0000 (0:00:00.074) 0:01:37.912 ****** 2026-01-10 14:48:16.350781 | orchestrator | 2026-01-10 14:48:16.350788 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2026-01-10 14:48:16.350792 | orchestrator | Saturday 10 January 2026 14:47:48 +0000 (0:00:00.077) 0:01:37.990 ****** 2026-01-10 14:48:16.350795 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:48:16.350801 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:48:16.350814 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:48:16.350822 | orchestrator | 2026-01-10 14:48:16.350828 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2026-01-10 14:48:16.350835 | orchestrator | Saturday 10 January 2026 14:47:56 +0000 (0:00:08.885) 0:01:46.876 ****** 2026-01-10 14:48:16.350841 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:48:16.350847 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:48:16.350857 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:48:16.350863 | orchestrator | 2026-01-10 14:48:16.350870 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2026-01-10 14:48:16.350876 | orchestrator | Saturday 10 January 2026 14:48:07 +0000 (0:00:10.269) 0:01:57.145 ****** 2026-01-10 14:48:16.350882 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:48:16.350888 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:48:16.350895 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:48:16.350901 | orchestrator | 2026-01-10 14:48:16.350907 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-10 14:48:16.350915 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-01-10 
14:48:16.350922 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-01-10 14:48:16.350928 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-01-10 14:48:16.350935 | orchestrator | 2026-01-10 14:48:16.350940 | orchestrator | 2026-01-10 14:48:16.350945 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-10 14:48:16.350949 | orchestrator | Saturday 10 January 2026 14:48:14 +0000 (0:00:06.922) 0:02:04.068 ****** 2026-01-10 14:48:16.350953 | orchestrator | =============================================================================== 2026-01-10 14:48:16.350957 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 15.20s 2026-01-10 14:48:16.350962 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 12.49s 2026-01-10 14:48:16.350966 | orchestrator | barbican : Restart barbican-keystone-listener container ---------------- 10.27s 2026-01-10 14:48:16.350970 | orchestrator | barbican : Copying over barbican.conf ---------------------------------- 10.23s 2026-01-10 14:48:16.350974 | orchestrator | barbican : Restart barbican-api container ------------------------------- 8.89s 2026-01-10 14:48:16.350978 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 7.47s 2026-01-10 14:48:16.350986 | orchestrator | barbican : Restart barbican-worker container ---------------------------- 6.92s 2026-01-10 14:48:16.350990 | orchestrator | barbican : Check barbican containers ------------------------------------ 4.82s 2026-01-10 14:48:16.350995 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 4.27s 2026-01-10 14:48:16.350999 | orchestrator | barbican : Copying over config.json files for services ------------------ 4.01s 2026-01-10 14:48:16.351003 | orchestrator | 
service-cert-copy : barbican | Copying over extra CA certificates ------- 3.68s 2026-01-10 14:48:16.351008 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.60s 2026-01-10 14:48:16.351012 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 3.53s 2026-01-10 14:48:16.351016 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.52s 2026-01-10 14:48:16.351020 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 3.28s 2026-01-10 14:48:16.351024 | orchestrator | barbican : Creating barbican database ----------------------------------- 3.01s 2026-01-10 14:48:16.351028 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.89s 2026-01-10 14:48:16.351033 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS certificate --- 2.50s 2026-01-10 14:48:16.351037 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 2.04s 2026-01-10 14:48:16.351045 | orchestrator | barbican : Checking whether barbican-api-paste.ini file exists ---------- 1.86s 2026-01-10 14:48:16.351049 | orchestrator | 2026-01-10 14:48:16 | INFO  | Task 3ff09c9d-e59e-4f49-a56b-241b5307dbd4 is in state STARTED 2026-01-10 14:48:16.351054 | orchestrator | 2026-01-10 14:48:16 | INFO  | Task 3f9c4f62-7f3f-456b-ba56-4b6eacaf0627 is in state STARTED 2026-01-10 14:48:16.351058 | orchestrator | 2026-01-10 14:48:16 | INFO  | Task 357e43dc-d06c-484d-ba54-c5ba1b41351b is in state STARTED 2026-01-10 14:48:16.351062 | orchestrator | 2026-01-10 14:48:16 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:48:19.380858 | orchestrator | 2026-01-10 14:48:19 | INFO  | Task eb9051fe-92a1-413c-8ece-65fea6179abd is in state STARTED 2026-01-10 14:48:19.381027 | orchestrator | 2026-01-10 14:48:19 | INFO  | Task 3ff09c9d-e59e-4f49-a56b-241b5307dbd4 is in state SUCCESS 
2026-01-10 14:48:19.381648 | orchestrator | 2026-01-10 14:48:19 | INFO  | Task 3f9c4f62-7f3f-456b-ba56-4b6eacaf0627 is in state STARTED 2026-01-10 14:48:19.382261 | orchestrator | 2026-01-10 14:48:19 | INFO  | Task 357e43dc-d06c-484d-ba54-c5ba1b41351b is in state STARTED 2026-01-10 14:48:19.382351 | orchestrator | 2026-01-10 14:48:19 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:49:05.055157 | orchestrator | 2026-01-10 14:49:05 | INFO  | Task ec657c91-e16f-4aa7-9012-31df0c93af90 is in state STARTED 2026-01-10 14:49:05.055245 | orchestrator | 2026-01-10 14:49:05 | INFO  | Task eb9051fe-92a1-413c-8ece-65fea6179abd is in state STARTED 2026-01-10 14:49:05.057759 | orchestrator | 2026-01-10 14:49:05 | INFO  | Task 
3f9c4f62-7f3f-456b-ba56-4b6eacaf0627 is in state STARTED 2026-01-10 14:49:05.058991 | orchestrator | 2026-01-10 14:49:05 | INFO  | Task 357e43dc-d06c-484d-ba54-c5ba1b41351b is in state STARTED 2026-01-10 14:49:05.059138 | orchestrator | 2026-01-10 14:49:05 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:49:08.100430 | orchestrator | 2026-01-10 14:49:08 | INFO  | Task ec657c91-e16f-4aa7-9012-31df0c93af90 is in state STARTED 2026-01-10 14:49:08.110283 | orchestrator | 2026-01-10 14:49:08 | INFO  | Task eb9051fe-92a1-413c-8ece-65fea6179abd is in state SUCCESS 2026-01-10 14:49:08.111661 | orchestrator | 2026-01-10 14:49:08.111717 | orchestrator | 2026-01-10 14:49:08.111724 | orchestrator | PLAY [Download ironic ipa images] ********************************************** 2026-01-10 14:49:08.111729 | orchestrator | 2026-01-10 14:49:08.111733 | orchestrator | TASK [Ensure the destination directory exists] ********************************* 2026-01-10 14:49:08.111737 | orchestrator | Saturday 10 January 2026 14:46:09 +0000 (0:00:00.081) 0:00:00.081 ****** 2026-01-10 14:49:08.111741 | orchestrator | changed: [localhost] 2026-01-10 14:49:08.111747 | orchestrator | 2026-01-10 14:49:08.111751 | orchestrator | TASK [Download ironic-agent initramfs] ***************************************** 2026-01-10 14:49:08.111755 | orchestrator | Saturday 10 January 2026 14:46:10 +0000 (0:00:00.908) 0:00:00.989 ****** 2026-01-10 14:49:08.111759 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent initramfs (3 retries left). 
2026-01-10 14:49:08.111763 | orchestrator | changed: [localhost] 2026-01-10 14:49:08.111767 | orchestrator | 2026-01-10 14:49:08.111771 | orchestrator | TASK [Download ironic-agent kernel] ******************************************** 2026-01-10 14:49:08.111775 | orchestrator | Saturday 10 January 2026 14:47:05 +0000 (0:00:54.428) 0:00:55.418 ****** 2026-01-10 14:49:08.111778 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent kernel (3 retries left). 2026-01-10 14:49:08.111782 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent kernel (2 retries left). 2026-01-10 14:49:08.111786 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent kernel (1 retries left). 2026-01-10 14:49:08.111790 | orchestrator | changed: [localhost] 2026-01-10 14:49:08.111793 | orchestrator | 2026-01-10 14:49:08.111797 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-10 14:49:08.111801 | orchestrator | 2026-01-10 14:49:08.111805 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-10 14:49:08.111808 | orchestrator | Saturday 10 January 2026 14:48:16 +0000 (0:01:11.109) 0:02:06.528 ****** 2026-01-10 14:49:08.111823 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:49:08.111827 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:49:08.111831 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:49:08.111834 | orchestrator | 2026-01-10 14:49:08.111838 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-10 14:49:08.111842 | orchestrator | Saturday 10 January 2026 14:48:16 +0000 (0:00:00.246) 0:02:06.774 ****** 2026-01-10 14:49:08.111845 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True 2026-01-10 14:49:08.111849 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False) 2026-01-10 14:49:08.111853 | orchestrator | ok: 
[testbed-node-1] => (item=enable_ironic_False) 2026-01-10 14:49:08.111857 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False) 2026-01-10 14:49:08.111861 | orchestrator | 2026-01-10 14:49:08.111865 | orchestrator | PLAY [Apply role ironic] ******************************************************* 2026-01-10 14:49:08.111868 | orchestrator | skipping: no hosts matched 2026-01-10 14:49:08.111873 | orchestrator | 2026-01-10 14:49:08.111877 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-10 14:49:08.111880 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-10 14:49:08.111895 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-10 14:49:08.111900 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-10 14:49:08.111909 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-10 14:49:08.111929 | orchestrator | 2026-01-10 14:49:08.111933 | orchestrator | 2026-01-10 14:49:08.111937 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-10 14:49:08.111940 | orchestrator | Saturday 10 January 2026 14:48:17 +0000 (0:00:01.174) 0:02:07.949 ****** 2026-01-10 14:49:08.111944 | orchestrator | =============================================================================== 2026-01-10 14:49:08.111948 | orchestrator | Download ironic-agent kernel ------------------------------------------- 71.11s 2026-01-10 14:49:08.111952 | orchestrator | Download ironic-agent initramfs ---------------------------------------- 54.43s 2026-01-10 14:49:08.111955 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.17s 2026-01-10 14:49:08.111959 | orchestrator | Ensure the destination directory exists 
--------------------------------- 0.91s 2026-01-10 14:49:08.111963 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.25s 2026-01-10 14:49:08.111966 | orchestrator | 2026-01-10 14:49:08.111970 | orchestrator | 2026-01-10 14:49:08.111974 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-10 14:49:08.111978 | orchestrator | 2026-01-10 14:49:08.111982 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-10 14:49:08.111985 | orchestrator | Saturday 10 January 2026 14:46:10 +0000 (0:00:00.228) 0:00:00.228 ****** 2026-01-10 14:49:08.111989 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:49:08.111993 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:49:08.111997 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:49:08.112000 | orchestrator | 2026-01-10 14:49:08.112004 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-10 14:49:08.112008 | orchestrator | Saturday 10 January 2026 14:46:10 +0000 (0:00:00.445) 0:00:00.674 ****** 2026-01-10 14:49:08.112046 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2026-01-10 14:49:08.112052 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2026-01-10 14:49:08.112058 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2026-01-10 14:49:08.112064 | orchestrator | 2026-01-10 14:49:08.112071 | orchestrator | PLAY [Apply role designate] **************************************************** 2026-01-10 14:49:08.112078 | orchestrator | 2026-01-10 14:49:08.112084 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-01-10 14:49:08.112102 | orchestrator | Saturday 10 January 2026 14:46:11 +0000 (0:00:00.611) 0:00:01.285 ****** 2026-01-10 14:49:08.112106 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for 
testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:49:08.112110 | orchestrator | 2026-01-10 14:49:08.112114 | orchestrator | TASK [service-ks-register : designate | Creating services] ********************* 2026-01-10 14:49:08.112118 | orchestrator | Saturday 10 January 2026 14:46:11 +0000 (0:00:00.658) 0:00:01.943 ****** 2026-01-10 14:49:08.112121 | orchestrator | changed: [testbed-node-0] => (item=designate (dns)) 2026-01-10 14:49:08.112125 | orchestrator | 2026-01-10 14:49:08.112172 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ******************** 2026-01-10 14:49:08.112177 | orchestrator | Saturday 10 January 2026 14:46:15 +0000 (0:00:03.819) 0:00:05.763 ****** 2026-01-10 14:49:08.112181 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal) 2026-01-10 14:49:08.112210 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public) 2026-01-10 14:49:08.112215 | orchestrator | 2026-01-10 14:49:08.112219 | orchestrator | TASK [service-ks-register : designate | Creating projects] ********************* 2026-01-10 14:49:08.112223 | orchestrator | Saturday 10 January 2026 14:46:22 +0000 (0:00:06.883) 0:00:12.646 ****** 2026-01-10 14:49:08.112227 | orchestrator | changed: [testbed-node-0] => (item=service) 2026-01-10 14:49:08.112231 | orchestrator | 2026-01-10 14:49:08.112235 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************ 2026-01-10 14:49:08.112238 | orchestrator | Saturday 10 January 2026 14:46:26 +0000 (0:00:04.245) 0:00:16.892 ****** 2026-01-10 14:49:08.112242 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-01-10 14:49:08.112256 | orchestrator | changed: [testbed-node-0] => (item=designate -> service) 2026-01-10 14:49:08.112260 | orchestrator | 2026-01-10 14:49:08.112264 | orchestrator | TASK [service-ks-register : designate | Creating 
roles] ************************ 2026-01-10 14:49:08.112269 | orchestrator | Saturday 10 January 2026 14:46:31 +0000 (0:00:04.376) 0:00:21.269 ****** 2026-01-10 14:49:08.112273 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-01-10 14:49:08.112278 | orchestrator | 2026-01-10 14:49:08.112283 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2026-01-10 14:49:08.112289 | orchestrator | Saturday 10 January 2026 14:46:34 +0000 (0:00:03.355) 0:00:24.625 ****** 2026-01-10 14:49:08.112295 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2026-01-10 14:49:08.112301 | orchestrator | 2026-01-10 14:49:08.112307 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2026-01-10 14:49:08.112335 | orchestrator | Saturday 10 January 2026 14:46:38 +0000 (0:00:03.604) 0:00:28.230 ****** 2026-01-10 14:49:08.112346 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-10 14:49:08.112380 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-10 14:49:08.112396 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-10 14:49:08.112405 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-10 14:49:08.112482 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-10 14:49:08.112489 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-10 14:49:08.112495 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-10 14:49:08.112503 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-10 14:49:08.112516 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-10 14:49:08.112524 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-10 14:49:08.112542 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-10 14:49:08.112549 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-10 14:49:08.112556 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-10 14:49:08.112563 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-10 14:49:08.112570 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-10 14:49:08.112582 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-10 14:49:08.112597 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-10 14:49:08.112605 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-10 14:49:08.112610 | orchestrator | 2026-01-10 14:49:08.112615 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2026-01-10 14:49:08.112619 | orchestrator | Saturday 10 January 2026 14:46:40 +0000 (0:00:02.744) 0:00:30.974 ****** 2026-01-10 14:49:08.112623 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:49:08.112627 | orchestrator | 2026-01-10 14:49:08.112632 | orchestrator | TASK [designate : Set 
designate policy file] *********************************** 2026-01-10 14:49:08.112637 | orchestrator | Saturday 10 January 2026 14:46:40 +0000 (0:00:00.131) 0:00:31.106 ****** 2026-01-10 14:49:08.112641 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:49:08.112645 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:49:08.112650 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:49:08.112654 | orchestrator | 2026-01-10 14:49:08.112658 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-01-10 14:49:08.112663 | orchestrator | Saturday 10 January 2026 14:46:41 +0000 (0:00:00.298) 0:00:31.405 ****** 2026-01-10 14:49:08.112667 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:49:08.112671 | orchestrator | 2026-01-10 14:49:08.112675 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2026-01-10 14:49:08.112679 | orchestrator | Saturday 10 January 2026 14:46:41 +0000 (0:00:00.625) 0:00:32.031 ****** 2026-01-10 14:49:08.112683 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 
'listen_port': '9001'}}}}) 2026-01-10 14:49:08.112689 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-10 14:49:08.112697 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-10 14:49:08.112704 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 
'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-10 14:49:08.112708 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-10 14:49:08.112712 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-10 14:49:08.112715 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-10 14:49:08.112722 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-10 14:49:08.112730 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-10 14:49:08.112740 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-10 14:49:08.112744 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-10 14:49:08.112748 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-10 14:49:08.112751 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-10 14:49:08.112755 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-10 14:49:08.112766 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-10 14:49:08.112770 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-10 14:49:08.112777 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-10 14:49:08.112781 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-10 14:49:08.112785 | orchestrator | 2026-01-10 14:49:08.112788 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2026-01-10 14:49:08.112792 | orchestrator | Saturday 10 January 2026 14:46:47 +0000 (0:00:05.591) 0:00:37.623 
****** 2026-01-10 14:49:08.112796 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-10 14:49:08.112800 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-10 14:49:08.113284 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-10 14:49:08.113308 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-10 14:49:08.113322 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-10 14:49:08.113329 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-10 14:49:08.113335 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:49:08.113340 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-10 14:49:08.113351 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-10 14:49:08.113369 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-10 14:49:08.113374 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-10 14:49:08.113381 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-10 14:49:08.113385 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 
'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-10 14:49:08.113388 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:49:08.113392 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-10 14:49:08.113400 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-10 14:49:08.113414 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-10 14:49:08.113419 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-10 14:49:08.113425 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-10 14:49:08.113429 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-10 14:49:08.113433 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:49:08.113437 | orchestrator | 2026-01-10 14:49:08.113441 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2026-01-10 14:49:08.113445 | orchestrator | Saturday 10 January 2026 14:46:48 +0000 (0:00:00.774) 0:00:38.397 ****** 2026-01-10 14:49:08.113449 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-10 14:49:08.113456 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-10 14:49:08.113471 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-10 14:49:08.113475 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-10 14:49:08.113482 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-10 14:49:08.113486 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-10 14:49:08.113490 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:49:08.113494 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-10 14:49:08.113502 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-10 14:49:08.113516 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-10 14:49:08.113520 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-10 14:49:08.113528 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-10 14:49:08.113532 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-10 14:49:08.113536 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:49:08.113540 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-10 14:49:08.113547 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-10 14:49:08.113560 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-10 
14:49:08.113565 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-10 14:49:08.113571 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-10 14:49:08.113575 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-10 14:49:08.113584 | orchestrator | skipping: [testbed-node-2] 2026-01-10 
14:49:08.113588 | orchestrator | 2026-01-10 14:49:08.113592 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2026-01-10 14:49:08.113596 | orchestrator | Saturday 10 January 2026 14:46:50 +0000 (0:00:02.000) 0:00:40.397 ****** 2026-01-10 14:49:08.113600 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-10 14:49:08.113604 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-10 14:49:08.113617 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-10 14:49:08.113624 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-10 14:49:08.113629 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-10 14:49:08.113636 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-10 14:49:08.113640 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-10 14:49:08.113644 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 
'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-10 14:49:08.113657 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-10 14:49:08.113662 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-10 14:49:08.113675 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-10 14:49:08.113682 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-10 14:49:08.113686 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-10 14:49:08.113690 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-10 14:49:08.113702 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-10 14:49:08.113707 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-10 14:49:08.113711 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-10 14:49:08.113718 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-10 14:49:08.113725 | orchestrator | 2026-01-10 14:49:08.113729 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2026-01-10 14:49:08.113733 | orchestrator | Saturday 10 January 2026 14:46:57 +0000 (0:00:06.812) 0:00:47.209 ****** 2026-01-10 14:49:08.113737 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-10 14:49:08.113741 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-10 14:49:08.113747 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-10 14:49:08.113751 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-10 14:49:08.113760 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-10 14:49:08.113764 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-10 14:49:08.113768 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-10 14:49:08.113772 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-10 14:49:08.113779 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': 
'30'}}}) 2026-01-10 14:49:08.113784 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-10 14:49:08.113790 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-10 14:49:08.113797 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-10 14:49:08.113801 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 
'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-10 14:49:08.113804 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-10 14:49:08.113808 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-10 14:49:08.113816 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 
'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-10 14:49:08.113820 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-10 14:49:08.113830 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-10 14:49:08.113834 | orchestrator | 2026-01-10 14:49:08.113838 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2026-01-10 14:49:08.113842 | orchestrator | Saturday 10 January 2026 
14:47:19 +0000 (0:00:22.136) 0:01:09.346 ****** 2026-01-10 14:49:08.113846 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-01-10 14:49:08.113849 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-01-10 14:49:08.113853 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-01-10 14:49:08.113857 | orchestrator | 2026-01-10 14:49:08.113903 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2026-01-10 14:49:08.113907 | orchestrator | Saturday 10 January 2026 14:47:27 +0000 (0:00:07.853) 0:01:17.200 ****** 2026-01-10 14:49:08.113911 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-01-10 14:49:08.113915 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-01-10 14:49:08.113919 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-01-10 14:49:08.113923 | orchestrator | 2026-01-10 14:49:08.113926 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2026-01-10 14:49:08.113931 | orchestrator | Saturday 10 January 2026 14:47:31 +0000 (0:00:04.399) 0:01:21.599 ****** 2026-01-10 14:49:08.113935 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-10 14:49:08.113943 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-10 14:49:08.113955 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 
'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-10 14:49:08.113960 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-10 14:49:08.113964 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-10 14:49:08.113968 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-10 14:49:08.113971 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-10 14:49:08.113979 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-10 14:49:08.113988 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-10 14:49:08.113994 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-10 14:49:08.113998 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-10 14:49:08.114002 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-10 14:49:08.114006 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-10 14:49:08.114077 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-10 14:49:08.114092 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-10 14:49:08.114097 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-10 14:49:08.114103 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-10 14:49:08.114108 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 
'timeout': '30'}}}) 2026-01-10 14:49:08.114112 | orchestrator | 2026-01-10 14:49:08.114116 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2026-01-10 14:49:08.114120 | orchestrator | Saturday 10 January 2026 14:47:35 +0000 (0:00:04.114) 0:01:25.714 ****** 2026-01-10 14:49:08.114124 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-10 14:49:08.114131 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-10 14:49:08.114138 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-10 14:49:08.114144 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-10 14:49:08.114148 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-10 14:49:08.114153 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-10 14:49:08.114157 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-10 14:49:08.114167 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-10 14:49:08.114171 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-10 14:49:08.114177 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-10 14:49:08.114181 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-10 14:49:08.114185 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-10 14:49:08.114190 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-10 14:49:08.114194 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-10 14:49:08.114205 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-10 14:49:08.114209 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-10 14:49:08.114216 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-10 14:49:08.114220 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-10 14:49:08.114224 | orchestrator | 2026-01-10 14:49:08.114228 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-01-10 14:49:08.114232 | orchestrator | Saturday 10 January 2026 14:47:39 +0000 (0:00:03.824) 0:01:29.539 ****** 2026-01-10 14:49:08.114236 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:49:08.114240 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:49:08.114243 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:49:08.114247 | orchestrator | 2026-01-10 14:49:08.114251 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2026-01-10 14:49:08.114255 | orchestrator | Saturday 10 January 2026 14:47:39 +0000 (0:00:00.533) 0:01:30.072 ****** 2026-01-10 14:49:08.114259 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-10 14:49:08.114268 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-10 14:49:08.114273 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-10 14:49:08.114279 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-10 14:49:08.114283 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-10 14:49:08.114287 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-10 
14:49:08.114291 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:49:08.114295 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-10 14:49:08.114304 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-10 14:49:08.114308 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-10 14:49:08.114314 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-10 14:49:08.114318 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-10 14:49:08.114322 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-10 14:49:08.114326 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:49:08.114330 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-10 14:49:08.114340 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-10 
14:49:08.114344 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-10 14:49:08.114348 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-10 14:49:08.114355 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-10 14:49:08.114360 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-10 14:49:08.114366 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:49:08.114370 | orchestrator | 2026-01-10 14:49:08.114374 | orchestrator | TASK [designate : Check designate containers] ********************************** 2026-01-10 14:49:08.114378 | orchestrator | Saturday 10 January 2026 14:47:41 +0000 (0:00:01.651) 0:01:31.723 ****** 2026-01-10 14:49:08.114382 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-10 14:49:08.114389 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 
'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-10 14:49:08.114393 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-10 14:49:08.114400 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-10 14:49:08.114404 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-10 14:49:08.114411 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-10 14:49:08.114418 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-10 14:49:08.114427 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-10 14:49:08.114433 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-10 14:49:08.114442 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-10 14:49:08.114448 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-10 14:49:08.114458 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-10 14:49:08.114464 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-10 14:49:08.114472 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-10 14:49:08.114483 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-10 14:49:08.114489 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-10 14:49:08.114498 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-10 14:49:08.114505 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-10 14:49:08.114516 | orchestrator | 2026-01-10 14:49:08.114522 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-01-10 14:49:08.114528 | orchestrator | Saturday 10 January 2026 14:47:45 +0000 (0:00:04.408) 0:01:36.132 ****** 2026-01-10 14:49:08.114535 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:49:08.114541 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:49:08.114547 | orchestrator | 
skipping: [testbed-node-2] 2026-01-10 14:49:08.114554 | orchestrator | 2026-01-10 14:49:08.114561 | orchestrator | TASK [designate : Creating Designate databases] ******************************** 2026-01-10 14:49:08.114567 | orchestrator | Saturday 10 January 2026 14:47:46 +0000 (0:00:00.320) 0:01:36.452 ****** 2026-01-10 14:49:08.114574 | orchestrator | changed: [testbed-node-0] => (item=designate) 2026-01-10 14:49:08.114579 | orchestrator | 2026-01-10 14:49:08.114583 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] *** 2026-01-10 14:49:08.114588 | orchestrator | Saturday 10 January 2026 14:47:48 +0000 (0:00:01.954) 0:01:38.408 ****** 2026-01-10 14:49:08.114593 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-01-10 14:49:08.114598 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}] 2026-01-10 14:49:08.114602 | orchestrator | 2026-01-10 14:49:08.114607 | orchestrator | TASK [designate : Running Designate bootstrap container] *********************** 2026-01-10 14:49:08.114611 | orchestrator | Saturday 10 January 2026 14:47:50 +0000 (0:00:02.287) 0:01:40.695 ****** 2026-01-10 14:49:08.114616 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:49:08.114620 | orchestrator | 2026-01-10 14:49:08.114624 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-01-10 14:49:08.114629 | orchestrator | Saturday 10 January 2026 14:48:06 +0000 (0:00:16.057) 0:01:56.753 ****** 2026-01-10 14:49:08.114633 | orchestrator | 2026-01-10 14:49:08.114638 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-01-10 14:49:08.114643 | orchestrator | Saturday 10 January 2026 14:48:06 +0000 (0:00:00.119) 0:01:56.872 ****** 2026-01-10 14:49:08.114647 | orchestrator | 2026-01-10 14:49:08.114651 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-01-10 
14:49:08.114656 | orchestrator | Saturday 10 January 2026 14:48:06 +0000 (0:00:00.141) 0:01:57.013 ****** 2026-01-10 14:49:08.114661 | orchestrator | 2026-01-10 14:49:08.114665 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ******** 2026-01-10 14:49:08.114668 | orchestrator | Saturday 10 January 2026 14:48:07 +0000 (0:00:00.142) 0:01:57.155 ****** 2026-01-10 14:49:08.114672 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:49:08.114676 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:49:08.114680 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:49:08.114684 | orchestrator | 2026-01-10 14:49:08.114692 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2026-01-10 14:49:08.114696 | orchestrator | Saturday 10 January 2026 14:48:16 +0000 (0:00:09.652) 0:02:06.808 ****** 2026-01-10 14:49:08.114700 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:49:08.114704 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:49:08.114707 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:49:08.114711 | orchestrator | 2026-01-10 14:49:08.114715 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] ************** 2026-01-10 14:49:08.114719 | orchestrator | Saturday 10 January 2026 14:48:26 +0000 (0:00:10.312) 0:02:17.120 ****** 2026-01-10 14:49:08.114723 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:49:08.114727 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:49:08.114731 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:49:08.114739 | orchestrator | 2026-01-10 14:49:08.114743 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] ************* 2026-01-10 14:49:08.114747 | orchestrator | Saturday 10 January 2026 14:48:34 +0000 (0:00:07.398) 0:02:24.519 ****** 2026-01-10 14:49:08.114751 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:49:08.114754 | orchestrator | 
changed: [testbed-node-0] 2026-01-10 14:49:08.114758 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:49:08.114762 | orchestrator | 2026-01-10 14:49:08.114766 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2026-01-10 14:49:08.114770 | orchestrator | Saturday 10 January 2026 14:48:44 +0000 (0:00:10.284) 0:02:34.804 ****** 2026-01-10 14:49:08.114774 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:49:08.114777 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:49:08.114781 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:49:08.114785 | orchestrator | 2026-01-10 14:49:08.114789 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2026-01-10 14:49:08.114793 | orchestrator | Saturday 10 January 2026 14:48:49 +0000 (0:00:04.957) 0:02:39.761 ****** 2026-01-10 14:49:08.114797 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:49:08.114801 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:49:08.114805 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:49:08.114808 | orchestrator | 2026-01-10 14:49:08.114812 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2026-01-10 14:49:08.114816 | orchestrator | Saturday 10 January 2026 14:48:58 +0000 (0:00:08.466) 0:02:48.228 ****** 2026-01-10 14:49:08.114820 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:49:08.114824 | orchestrator | 2026-01-10 14:49:08.114828 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-10 14:49:08.114832 | orchestrator | testbed-node-0 : ok=29  changed=24  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-01-10 14:49:08.114838 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-01-10 14:49:08.114841 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  
rescued=0 ignored=0 2026-01-10 14:49:08.114845 | orchestrator | 2026-01-10 14:49:08.114849 | orchestrator | 2026-01-10 14:49:08.114853 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-10 14:49:08.114857 | orchestrator | Saturday 10 January 2026 14:49:04 +0000 (0:00:06.755) 0:02:54.983 ****** 2026-01-10 14:49:08.114860 | orchestrator | =============================================================================== 2026-01-10 14:49:08.114864 | orchestrator | designate : Copying over designate.conf -------------------------------- 22.14s 2026-01-10 14:49:08.114868 | orchestrator | designate : Running Designate bootstrap container ---------------------- 16.06s 2026-01-10 14:49:08.114872 | orchestrator | designate : Restart designate-api container ---------------------------- 10.31s 2026-01-10 14:49:08.114876 | orchestrator | designate : Restart designate-producer container ----------------------- 10.28s 2026-01-10 14:49:08.114879 | orchestrator | designate : Restart designate-backend-bind9 container ------------------- 9.65s 2026-01-10 14:49:08.114883 | orchestrator | designate : Restart designate-worker container -------------------------- 8.47s 2026-01-10 14:49:08.114887 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 7.85s 2026-01-10 14:49:08.114891 | orchestrator | designate : Restart designate-central container ------------------------- 7.40s 2026-01-10 14:49:08.114895 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 6.88s 2026-01-10 14:49:08.114899 | orchestrator | designate : Copying over config.json files for services ----------------- 6.81s 2026-01-10 14:49:08.114903 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 6.76s 2026-01-10 14:49:08.114906 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 5.59s 2026-01-10 14:49:08.114914 | 
orchestrator | designate : Restart designate-mdns container ---------------------------- 4.96s 2026-01-10 14:49:08.114918 | orchestrator | designate : Check designate containers ---------------------------------- 4.41s 2026-01-10 14:49:08.114922 | orchestrator | designate : Copying over named.conf ------------------------------------- 4.40s 2026-01-10 14:49:08.114926 | orchestrator | service-ks-register : designate | Creating users ------------------------ 4.38s 2026-01-10 14:49:08.114930 | orchestrator | service-ks-register : designate | Creating projects --------------------- 4.25s 2026-01-10 14:49:08.114933 | orchestrator | designate : Copying over rndc.conf -------------------------------------- 4.11s 2026-01-10 14:49:08.114937 | orchestrator | designate : Copying over rndc.key --------------------------------------- 3.82s 2026-01-10 14:49:08.114970 | orchestrator | service-ks-register : designate | Creating services --------------------- 3.82s 2026-01-10 14:49:08.114979 | orchestrator | 2026-01-10 14:49:08 | INFO  | Task 6d33964d-76ec-42fa-a698-a7d640acde77 is in state STARTED 2026-01-10 14:49:08.116690 | orchestrator | 2026-01-10 14:49:08 | INFO  | Task 3f9c4f62-7f3f-456b-ba56-4b6eacaf0627 is in state STARTED 2026-01-10 14:49:08.119360 | orchestrator | 2026-01-10 14:49:08 | INFO  | Task 357e43dc-d06c-484d-ba54-c5ba1b41351b is in state STARTED 2026-01-10 14:49:08.119417 | orchestrator | 2026-01-10 14:49:08 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:49:11.163670 | orchestrator | 2026-01-10 14:49:11 | INFO  | Task ec657c91-e16f-4aa7-9012-31df0c93af90 is in state STARTED 2026-01-10 14:49:11.163730 | orchestrator | 2026-01-10 14:49:11 | INFO  | Task 6d33964d-76ec-42fa-a698-a7d640acde77 is in state STARTED 2026-01-10 14:49:11.164278 | orchestrator | 2026-01-10 14:49:11 | INFO  | Task 3f9c4f62-7f3f-456b-ba56-4b6eacaf0627 is in state STARTED 2026-01-10 14:49:11.167322 | orchestrator | 2026-01-10 14:49:11 | INFO  | Task 
357e43dc-d06c-484d-ba54-c5ba1b41351b is in state STARTED 2026-01-10 14:49:11.167393 | orchestrator | 2026-01-10 14:49:11 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:49:14.216224 | orchestrator | 2026-01-10 14:49:14 | INFO  | Task ec657c91-e16f-4aa7-9012-31df0c93af90 is in state STARTED 2026-01-10 14:49:14.217139 | orchestrator | 2026-01-10 14:49:14 | INFO  | Task a94b7b03-c0a5-4d57-aa04-80b708ece6d9 is in state STARTED 2026-01-10 14:49:14.217904 | orchestrator | 2026-01-10 14:49:14 | INFO  | Task 6d33964d-76ec-42fa-a698-a7d640acde77 is in state SUCCESS 2026-01-10 14:49:14.219956 | orchestrator | 2026-01-10 14:49:14 | INFO  | Task 3f9c4f62-7f3f-456b-ba56-4b6eacaf0627 is in state STARTED 2026-01-10 14:49:14.220572 | orchestrator | 2026-01-10 14:49:14 | INFO  | Task 357e43dc-d06c-484d-ba54-c5ba1b41351b is in state STARTED 2026-01-10 14:49:14.220598 | orchestrator | 2026-01-10 14:49:14 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:49:17.254561 | orchestrator | 2026-01-10 14:49:17 | INFO  | Task ec657c91-e16f-4aa7-9012-31df0c93af90 is in state STARTED 2026-01-10 14:49:17.256822 | orchestrator | 2026-01-10 14:49:17 | INFO  | Task a94b7b03-c0a5-4d57-aa04-80b708ece6d9 is in state STARTED 2026-01-10 14:49:17.257765 | orchestrator | 2026-01-10 14:49:17 | INFO  | Task 3f9c4f62-7f3f-456b-ba56-4b6eacaf0627 is in state STARTED 2026-01-10 14:49:17.258609 | orchestrator | 2026-01-10 14:49:17 | INFO  | Task 357e43dc-d06c-484d-ba54-c5ba1b41351b is in state STARTED 2026-01-10 14:49:17.258730 | orchestrator | 2026-01-10 14:49:17 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:49:20.295893 | orchestrator | 2026-01-10 14:49:20 | INFO  | Task ec657c91-e16f-4aa7-9012-31df0c93af90 is in state STARTED 2026-01-10 14:49:20.298080 | orchestrator | 2026-01-10 14:49:20 | INFO  | Task a94b7b03-c0a5-4d57-aa04-80b708ece6d9 is in state STARTED 2026-01-10 14:49:20.301594 | orchestrator | 2026-01-10 14:49:20 | INFO  | Task 
3f9c4f62-7f3f-456b-ba56-4b6eacaf0627 is in state STARTED 2026-01-10 14:49:20.303098 | orchestrator | 2026-01-10 14:49:20 | INFO  | Task 357e43dc-d06c-484d-ba54-c5ba1b41351b is in state STARTED 2026-01-10 14:49:20.303156 | orchestrator | 2026-01-10 14:49:20 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:49:23.328040 | orchestrator | 2026-01-10 14:49:23 | INFO  | Task ec657c91-e16f-4aa7-9012-31df0c93af90 is in state STARTED 2026-01-10 14:49:23.328750 | orchestrator | 2026-01-10 14:49:23 | INFO  | Task a94b7b03-c0a5-4d57-aa04-80b708ece6d9 is in state STARTED 2026-01-10 14:49:23.329599 | orchestrator | 2026-01-10 14:49:23 | INFO  | Task 3f9c4f62-7f3f-456b-ba56-4b6eacaf0627 is in state STARTED 2026-01-10 14:49:23.330808 | orchestrator | 2026-01-10 14:49:23 | INFO  | Task 357e43dc-d06c-484d-ba54-c5ba1b41351b is in state STARTED 2026-01-10 14:49:23.330847 | orchestrator | 2026-01-10 14:49:23 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:49:26.363411 | orchestrator | 2026-01-10 14:49:26 | INFO  | Task ec657c91-e16f-4aa7-9012-31df0c93af90 is in state STARTED 2026-01-10 14:49:26.364013 | orchestrator | 2026-01-10 14:49:26 | INFO  | Task a94b7b03-c0a5-4d57-aa04-80b708ece6d9 is in state STARTED 2026-01-10 14:49:26.364390 | orchestrator | 2026-01-10 14:49:26 | INFO  | Task 3f9c4f62-7f3f-456b-ba56-4b6eacaf0627 is in state STARTED 2026-01-10 14:49:26.366061 | orchestrator | 2026-01-10 14:49:26 | INFO  | Task 357e43dc-d06c-484d-ba54-c5ba1b41351b is in state STARTED 2026-01-10 14:49:26.366100 | orchestrator | 2026-01-10 14:49:26 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:49:29.400345 | orchestrator | 2026-01-10 14:49:29 | INFO  | Task ec657c91-e16f-4aa7-9012-31df0c93af90 is in state STARTED 2026-01-10 14:49:29.400886 | orchestrator | 2026-01-10 14:49:29 | INFO  | Task a94b7b03-c0a5-4d57-aa04-80b708ece6d9 is in state STARTED 2026-01-10 14:49:29.401932 | orchestrator | 2026-01-10 14:49:29 | INFO  | Task 
3f9c4f62-7f3f-456b-ba56-4b6eacaf0627 is in state SUCCESS 2026-01-10 14:49:29.402740 | orchestrator | 2026-01-10 14:49:29.402765 | orchestrator | 2026-01-10 14:49:29.402773 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-10 14:49:29.402780 | orchestrator | 2026-01-10 14:49:29.402786 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-10 14:49:29.402792 | orchestrator | Saturday 10 January 2026 14:49:09 +0000 (0:00:00.182) 0:00:00.182 ****** 2026-01-10 14:49:29.402799 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:49:29.402806 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:49:29.402812 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:49:29.402818 | orchestrator | 2026-01-10 14:49:29.402824 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-10 14:49:29.402830 | orchestrator | Saturday 10 January 2026 14:49:09 +0000 (0:00:00.311) 0:00:00.494 ****** 2026-01-10 14:49:29.402837 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2026-01-10 14:49:29.402843 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2026-01-10 14:49:29.402849 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2026-01-10 14:49:29.402855 | orchestrator | 2026-01-10 14:49:29.402862 | orchestrator | PLAY [Wait for the Keystone service] ******************************************* 2026-01-10 14:49:29.402868 | orchestrator | 2026-01-10 14:49:29.402874 | orchestrator | TASK [Waiting for Keystone public port to be UP] ******************************* 2026-01-10 14:49:29.402881 | orchestrator | Saturday 10 January 2026 14:49:10 +0000 (0:00:00.778) 0:00:01.272 ****** 2026-01-10 14:49:29.402887 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:49:29.402893 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:49:29.402900 | orchestrator | ok: [testbed-node-2] 2026-01-10 
14:49:29.402906 | orchestrator | 2026-01-10 14:49:29.402971 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-10 14:49:29.403031 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-10 14:49:29.403039 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-10 14:49:29.403047 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-10 14:49:29.403054 | orchestrator | 2026-01-10 14:49:29.403061 | orchestrator | 2026-01-10 14:49:29.403068 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-10 14:49:29.403075 | orchestrator | Saturday 10 January 2026 14:49:11 +0000 (0:00:00.663) 0:00:01.935 ****** 2026-01-10 14:49:29.403081 | orchestrator | =============================================================================== 2026-01-10 14:49:29.403088 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.78s 2026-01-10 14:49:29.403094 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.66s 2026-01-10 14:49:29.403101 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.31s 2026-01-10 14:49:29.403107 | orchestrator | 2026-01-10 14:49:29.403113 | orchestrator | 2026-01-10 14:49:29.403120 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-10 14:49:29.403126 | orchestrator | 2026-01-10 14:49:29.403132 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-10 14:49:29.403138 | orchestrator | Saturday 10 January 2026 14:48:20 +0000 (0:00:00.564) 0:00:00.564 ****** 2026-01-10 14:49:29.403144 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:49:29.403150 | orchestrator | ok: 
[testbed-node-1] 2026-01-10 14:49:29.403157 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:49:29.403163 | orchestrator | 2026-01-10 14:49:29.403169 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-10 14:49:29.403175 | orchestrator | Saturday 10 January 2026 14:48:21 +0000 (0:00:00.515) 0:00:01.079 ****** 2026-01-10 14:49:29.403181 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2026-01-10 14:49:29.403187 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2026-01-10 14:49:29.403193 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2026-01-10 14:49:29.403199 | orchestrator | 2026-01-10 14:49:29.403205 | orchestrator | PLAY [Apply role placement] **************************************************** 2026-01-10 14:49:29.403211 | orchestrator | 2026-01-10 14:49:29.403217 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-01-10 14:49:29.403223 | orchestrator | Saturday 10 January 2026 14:48:21 +0000 (0:00:00.642) 0:00:01.722 ****** 2026-01-10 14:49:29.403229 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:49:29.403235 | orchestrator | 2026-01-10 14:49:29.403241 | orchestrator | TASK [service-ks-register : placement | Creating services] ********************* 2026-01-10 14:49:29.403248 | orchestrator | Saturday 10 January 2026 14:48:22 +0000 (0:00:00.491) 0:00:02.213 ****** 2026-01-10 14:49:29.403254 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2026-01-10 14:49:29.403260 | orchestrator | 2026-01-10 14:49:29.403266 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2026-01-10 14:49:29.403272 | orchestrator | Saturday 10 January 2026 14:48:26 +0000 (0:00:03.744) 0:00:05.958 ****** 2026-01-10 14:49:29.403278 | orchestrator | changed: 
[testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2026-01-10 14:49:29.403285 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2026-01-10 14:49:29.403291 | orchestrator | 2026-01-10 14:49:29.403297 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2026-01-10 14:49:29.403303 | orchestrator | Saturday 10 January 2026 14:48:32 +0000 (0:00:06.320) 0:00:12.279 ****** 2026-01-10 14:49:29.403309 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-01-10 14:49:29.403321 | orchestrator | 2026-01-10 14:49:29.403327 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2026-01-10 14:49:29.403333 | orchestrator | Saturday 10 January 2026 14:48:36 +0000 (0:00:03.973) 0:00:16.252 ****** 2026-01-10 14:49:29.403350 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-01-10 14:49:29.403357 | orchestrator | changed: [testbed-node-0] => (item=placement -> service) 2026-01-10 14:49:29.403363 | orchestrator | 2026-01-10 14:49:29.403370 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2026-01-10 14:49:29.403376 | orchestrator | Saturday 10 January 2026 14:48:40 +0000 (0:00:04.271) 0:00:20.524 ****** 2026-01-10 14:49:29.403383 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-01-10 14:49:29.403389 | orchestrator | 2026-01-10 14:49:29.403395 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2026-01-10 14:49:29.403401 | orchestrator | Saturday 10 January 2026 14:48:43 +0000 (0:00:03.154) 0:00:23.679 ****** 2026-01-10 14:49:29.403407 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2026-01-10 14:49:29.403413 | orchestrator | 2026-01-10 14:49:29.403419 | orchestrator | TASK [placement : include_tasks] 
*********************************************** 2026-01-10 14:49:29.403425 | orchestrator | Saturday 10 January 2026 14:48:47 +0000 (0:00:03.381) 0:00:27.060 ****** 2026-01-10 14:49:29.403431 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:49:29.403437 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:49:29.403443 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:49:29.403450 | orchestrator | 2026-01-10 14:49:29.403456 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2026-01-10 14:49:29.403462 | orchestrator | Saturday 10 January 2026 14:48:47 +0000 (0:00:00.305) 0:00:27.365 ****** 2026-01-10 14:49:29.403474 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-10 14:49:29.403484 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-10 14:49:29.403492 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-10 14:49:29.403503 | orchestrator | 2026-01-10 14:49:29.403509 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2026-01-10 14:49:29.403516 | orchestrator | Saturday 10 January 2026 14:48:48 +0000 (0:00:00.834) 0:00:28.200 ****** 2026-01-10 14:49:29.403523 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:49:29.403528 | orchestrator | 2026-01-10 14:49:29.403535 | orchestrator | TASK [placement : Set placement 
policy file] *********************************** 2026-01-10 14:49:29.403544 | orchestrator | Saturday 10 January 2026 14:48:48 +0000 (0:00:00.136) 0:00:28.336 ****** 2026-01-10 14:49:29.403551 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:49:29.403558 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:49:29.403564 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:49:29.403571 | orchestrator | 2026-01-10 14:49:29.403578 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-01-10 14:49:29.403585 | orchestrator | Saturday 10 January 2026 14:48:49 +0000 (0:00:00.497) 0:00:28.834 ****** 2026-01-10 14:49:29.403591 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:49:29.403597 | orchestrator | 2026-01-10 14:49:29.403603 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2026-01-10 14:49:29.403608 | orchestrator | Saturday 10 January 2026 14:48:49 +0000 (0:00:00.517) 0:00:29.351 ****** 2026-01-10 14:49:29.403618 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-10 14:49:29.403625 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-10 14:49:29.403690 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-10 14:49:29.403704 | orchestrator | 
2026-01-10 14:49:29.403710 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2026-01-10 14:49:29.403716 | orchestrator | Saturday 10 January 2026 14:48:51 +0000 (0:00:01.586) 0:00:30.937 ****** 2026-01-10 14:49:29.403728 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-10 14:49:29.403735 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:49:29.403744 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 
'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-10 14:49:29.403750 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:49:29.403755 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-10 14:49:29.403761 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:49:29.403767 | orchestrator | 2026-01-10 14:49:29.403786 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2026-01-10 14:49:29.403796 | orchestrator | Saturday 10 January 2026 14:48:52 +0000 (0:00:00.973) 0:00:31.911 ****** 2026-01-10 14:49:29.403802 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-10 14:49:29.403808 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:49:29.403819 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-10 14:49:29.403826 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:49:29.403837 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-10 14:49:29.403843 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:49:29.403850 | orchestrator | 2026-01-10 14:49:29.403856 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2026-01-10 14:49:29.403862 | orchestrator | Saturday 10 January 2026 14:48:52 +0000 (0:00:00.635) 0:00:32.546 ****** 2026-01-10 14:49:29.403869 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-10 
14:49:29.403879 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-10 14:49:29.403891 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-10 14:49:29.403897 | orchestrator | 2026-01-10 14:49:29.403904 | orchestrator | TASK [placement : 
Copying over placement.conf] ********************************* 2026-01-10 14:49:29.403910 | orchestrator | Saturday 10 January 2026 14:48:54 +0000 (0:00:01.327) 0:00:33.873 ****** 2026-01-10 14:49:29.403916 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-10 14:49:29.403924 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-10 14:49:29.403934 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-10 14:49:29.403939 | orchestrator | 2026-01-10 14:49:29.403945 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2026-01-10 14:49:29.403950 | orchestrator | Saturday 10 January 2026 14:48:56 +0000 (0:00:02.241) 0:00:36.115 ****** 2026-01-10 14:49:29.403955 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-01-10 14:49:29.403962 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-01-10 14:49:29.403969 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-01-10 14:49:29.403975 | orchestrator | 2026-01-10 14:49:29.403982 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2026-01-10 14:49:29.404039 | orchestrator | Saturday 10 January 2026 14:48:57 
+0000 (0:00:01.264) 0:00:37.379 ****** 2026-01-10 14:49:29.404046 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:49:29.404053 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:49:29.404059 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:49:29.404065 | orchestrator | 2026-01-10 14:49:29.404071 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2026-01-10 14:49:29.404077 | orchestrator | Saturday 10 January 2026 14:48:58 +0000 (0:00:01.137) 0:00:38.517 ****** 2026-01-10 14:49:29.404090 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-10 14:49:29.404097 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:49:29.404106 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-10 14:49:29.404120 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:49:29.404127 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-10 14:49:29.404134 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:49:29.404140 | orchestrator | 2026-01-10 14:49:29.404147 | orchestrator | TASK [placement : Check placement containers] ********************************** 2026-01-10 14:49:29.404153 | orchestrator | Saturday 10 January 2026 14:48:59 +0000 (0:00:00.449) 0:00:38.967 ****** 2026-01-10 14:49:29.404160 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 
'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-10 14:49:29.404172 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-10 14:49:29.404182 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 
'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-10 14:49:29.404193 | orchestrator | 2026-01-10 14:49:29.404200 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2026-01-10 14:49:29.404207 | orchestrator | Saturday 10 January 2026 14:49:00 +0000 (0:00:01.055) 0:00:40.022 ****** 2026-01-10 14:49:29.404214 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:49:29.404221 | orchestrator | 2026-01-10 14:49:29.404228 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2026-01-10 14:49:29.404234 | orchestrator | Saturday 10 January 2026 14:49:02 +0000 (0:00:02.255) 0:00:42.278 ****** 2026-01-10 14:49:29.404241 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:49:29.404248 | orchestrator | 2026-01-10 14:49:29.404255 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2026-01-10 14:49:29.404262 | orchestrator | Saturday 10 January 2026 14:49:04 +0000 (0:00:02.206) 0:00:44.485 ****** 2026-01-10 14:49:29.404269 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:49:29.404276 | orchestrator | 2026-01-10 14:49:29.404283 | orchestrator | TASK [placement : Flush handlers] 
********************************************** 2026-01-10 14:49:29.404290 | orchestrator | Saturday 10 January 2026 14:49:18 +0000 (0:00:13.895) 0:00:58.382 ****** 2026-01-10 14:49:29.404297 | orchestrator | 2026-01-10 14:49:29.404303 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-01-10 14:49:29.404310 | orchestrator | Saturday 10 January 2026 14:49:18 +0000 (0:00:00.219) 0:00:58.601 ****** 2026-01-10 14:49:29.404317 | orchestrator | 2026-01-10 14:49:29.404324 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-01-10 14:49:29.404331 | orchestrator | Saturday 10 January 2026 14:49:18 +0000 (0:00:00.175) 0:00:58.777 ****** 2026-01-10 14:49:29.404338 | orchestrator | 2026-01-10 14:49:29.404345 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2026-01-10 14:49:29.404352 | orchestrator | Saturday 10 January 2026 14:49:19 +0000 (0:00:00.281) 0:00:59.059 ****** 2026-01-10 14:49:29.404359 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:49:29.404366 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:49:29.404373 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:49:29.404380 | orchestrator | 2026-01-10 14:49:29.404388 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-10 14:49:29.404395 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-01-10 14:49:29.404403 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-01-10 14:49:29.404410 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-01-10 14:49:29.404418 | orchestrator | 2026-01-10 14:49:29.404424 | orchestrator | 2026-01-10 14:49:29.404431 | orchestrator | TASKS RECAP 
******************************************************************** 2026-01-10 14:49:29.404438 | orchestrator | Saturday 10 January 2026 14:49:26 +0000 (0:00:07.133) 0:01:06.192 ****** 2026-01-10 14:49:29.404445 | orchestrator | =============================================================================== 2026-01-10 14:49:29.404452 | orchestrator | placement : Running placement bootstrap container ---------------------- 13.90s 2026-01-10 14:49:29.404459 | orchestrator | placement : Restart placement-api container ----------------------------- 7.13s 2026-01-10 14:49:29.404465 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 6.32s 2026-01-10 14:49:29.404477 | orchestrator | service-ks-register : placement | Creating users ------------------------ 4.27s 2026-01-10 14:49:29.404484 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.97s 2026-01-10 14:49:29.404491 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.74s 2026-01-10 14:49:29.404502 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 3.38s 2026-01-10 14:49:29.404509 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.15s 2026-01-10 14:49:29.404516 | orchestrator | placement : Creating placement databases -------------------------------- 2.26s 2026-01-10 14:49:29.404523 | orchestrator | placement : Copying over placement.conf --------------------------------- 2.24s 2026-01-10 14:49:29.404530 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.21s 2026-01-10 14:49:29.404536 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.59s 2026-01-10 14:49:29.404543 | orchestrator | placement : Copying over config.json files for services ----------------- 1.33s 2026-01-10 14:49:29.404550 | orchestrator | placement : Copying over 
placement-api wsgi configuration --------------- 1.26s 2026-01-10 14:49:29.404557 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.14s 2026-01-10 14:49:29.404564 | orchestrator | placement : Check placement containers ---------------------------------- 1.06s 2026-01-10 14:49:29.404571 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS certificate --- 0.97s 2026-01-10 14:49:29.404578 | orchestrator | placement : Ensuring config directories exist --------------------------- 0.83s 2026-01-10 14:49:29.404584 | orchestrator | placement : Flush handlers ---------------------------------------------- 0.68s 2026-01-10 14:49:29.404591 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.64s 2026-01-10 14:49:29.404602 | orchestrator | 2026-01-10 14:49:29 | INFO  | Task 357e43dc-d06c-484d-ba54-c5ba1b41351b is in state STARTED 2026-01-10 14:49:29.404609 | orchestrator | 2026-01-10 14:49:29 | INFO  | Task 1cf9f347-4f7e-4a89-a8a9-0300b4019c27 is in state STARTED 2026-01-10 14:49:29.404616 | orchestrator | 2026-01-10 14:49:29 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:49:32.456781 | orchestrator | 2026-01-10 14:49:32 | INFO  | Task ec657c91-e16f-4aa7-9012-31df0c93af90 is in state STARTED 2026-01-10 14:49:32.457184 | orchestrator | 2026-01-10 14:49:32 | INFO  | Task a94b7b03-c0a5-4d57-aa04-80b708ece6d9 is in state STARTED 2026-01-10 14:49:32.459781 | orchestrator | 2026-01-10 14:49:32 | INFO  | Task 357e43dc-d06c-484d-ba54-c5ba1b41351b is in state STARTED 2026-01-10 14:49:32.463415 | orchestrator | 2026-01-10 14:49:32 | INFO  | Task 1cf9f347-4f7e-4a89-a8a9-0300b4019c27 is in state STARTED 2026-01-10 14:49:32.463456 | orchestrator | 2026-01-10 14:49:32 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:49:35.530115 | orchestrator | 2026-01-10 14:49:35 | INFO  | Task ec657c91-e16f-4aa7-9012-31df0c93af90 is in state STARTED 2026-01-10 
14:49:35.531373 | orchestrator | 2026-01-10 14:49:35 | INFO  | Task a94b7b03-c0a5-4d57-aa04-80b708ece6d9 is in state STARTED 2026-01-10 14:49:35.533370 | orchestrator | 2026-01-10 14:49:35 | INFO  | Task 357e43dc-d06c-484d-ba54-c5ba1b41351b is in state STARTED 2026-01-10 14:49:35.534669 | orchestrator | 2026-01-10 14:49:35 | INFO  | Task 1cf9f347-4f7e-4a89-a8a9-0300b4019c27 is in state STARTED 2026-01-10 14:49:35.534696 | orchestrator | 2026-01-10 14:49:35 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:49:38.579366 | orchestrator | 2026-01-10 14:49:38 | INFO  | Task ec657c91-e16f-4aa7-9012-31df0c93af90 is in state STARTED 2026-01-10 14:49:38.581294 | orchestrator | 2026-01-10 14:49:38 | INFO  | Task a94b7b03-c0a5-4d57-aa04-80b708ece6d9 is in state STARTED 2026-01-10 14:49:38.582465 | orchestrator | 2026-01-10 14:49:38 | INFO  | Task 357e43dc-d06c-484d-ba54-c5ba1b41351b is in state STARTED 2026-01-10 14:49:38.583903 | orchestrator | 2026-01-10 14:49:38 | INFO  | Task 1cf9f347-4f7e-4a89-a8a9-0300b4019c27 is in state STARTED 2026-01-10 14:49:38.583938 | orchestrator | 2026-01-10 14:49:38 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:49:41.617793 | orchestrator | 2026-01-10 14:49:41 | INFO  | Task ec657c91-e16f-4aa7-9012-31df0c93af90 is in state STARTED 2026-01-10 14:49:41.618179 | orchestrator | 2026-01-10 14:49:41 | INFO  | Task a94b7b03-c0a5-4d57-aa04-80b708ece6d9 is in state STARTED 2026-01-10 14:49:41.619092 | orchestrator | 2026-01-10 14:49:41 | INFO  | Task 357e43dc-d06c-484d-ba54-c5ba1b41351b is in state STARTED 2026-01-10 14:49:41.619756 | orchestrator | 2026-01-10 14:49:41 | INFO  | Task 1cf9f347-4f7e-4a89-a8a9-0300b4019c27 is in state STARTED 2026-01-10 14:49:41.621624 | orchestrator | 2026-01-10 14:49:41 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:49:44.673166 | orchestrator | 2026-01-10 14:49:44 | INFO  | Task ec657c91-e16f-4aa7-9012-31df0c93af90 is in state STARTED 2026-01-10 14:49:44.673506 | orchestrator 
| 2026-01-10 14:49:44 | INFO  | Task a94b7b03-c0a5-4d57-aa04-80b708ece6d9 is in state STARTED 2026-01-10 14:49:44.674413 | orchestrator | 2026-01-10 14:49:44 | INFO  | Task 357e43dc-d06c-484d-ba54-c5ba1b41351b is in state STARTED 2026-01-10 14:49:44.676910 | orchestrator | 2026-01-10 14:49:44 | INFO  | Task 1cf9f347-4f7e-4a89-a8a9-0300b4019c27 is in state STARTED 2026-01-10 14:49:44.677015 | orchestrator | 2026-01-10 14:49:44 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:49:47.714657 | orchestrator | 2026-01-10 14:49:47 | INFO  | Task ec657c91-e16f-4aa7-9012-31df0c93af90 is in state STARTED 2026-01-10 14:49:47.715115 | orchestrator | 2026-01-10 14:49:47 | INFO  | Task a94b7b03-c0a5-4d57-aa04-80b708ece6d9 is in state STARTED 2026-01-10 14:49:47.715437 | orchestrator | 2026-01-10 14:49:47 | INFO  | Task 357e43dc-d06c-484d-ba54-c5ba1b41351b is in state STARTED 2026-01-10 14:49:47.716091 | orchestrator | 2026-01-10 14:49:47 | INFO  | Task 1cf9f347-4f7e-4a89-a8a9-0300b4019c27 is in state STARTED 2026-01-10 14:49:47.716128 | orchestrator | 2026-01-10 14:49:47 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:49:50.766807 | orchestrator | 2026-01-10 14:49:50 | INFO  | Task ec657c91-e16f-4aa7-9012-31df0c93af90 is in state STARTED 2026-01-10 14:49:50.766898 | orchestrator | 2026-01-10 14:49:50 | INFO  | Task a94b7b03-c0a5-4d57-aa04-80b708ece6d9 is in state STARTED 2026-01-10 14:49:50.766928 | orchestrator | 2026-01-10 14:49:50 | INFO  | Task 357e43dc-d06c-484d-ba54-c5ba1b41351b is in state STARTED 2026-01-10 14:49:50.766936 | orchestrator | 2026-01-10 14:49:50 | INFO  | Task 1cf9f347-4f7e-4a89-a8a9-0300b4019c27 is in state STARTED 2026-01-10 14:49:50.766943 | orchestrator | 2026-01-10 14:49:50 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:49:53.794338 | orchestrator | 2026-01-10 14:49:53 | INFO  | Task ec657c91-e16f-4aa7-9012-31df0c93af90 is in state STARTED 2026-01-10 14:49:53.794712 | orchestrator | 2026-01-10 14:49:53 | INFO  | 
Task a94b7b03-c0a5-4d57-aa04-80b708ece6d9 is in state STARTED 2026-01-10 14:49:53.795961 | orchestrator | 2026-01-10 14:49:53 | INFO  | Task 357e43dc-d06c-484d-ba54-c5ba1b41351b is in state STARTED 2026-01-10 14:49:53.796457 | orchestrator | 2026-01-10 14:49:53 | INFO  | Task 1cf9f347-4f7e-4a89-a8a9-0300b4019c27 is in state STARTED 2026-01-10 14:49:53.796482 | orchestrator | 2026-01-10 14:49:53 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:49:56.828210 | orchestrator | 2026-01-10 14:49:56 | INFO  | Task ec657c91-e16f-4aa7-9012-31df0c93af90 is in state STARTED 2026-01-10 14:49:56.830540 | orchestrator | 2026-01-10 14:49:56 | INFO  | Task a94b7b03-c0a5-4d57-aa04-80b708ece6d9 is in state STARTED 2026-01-10 14:49:56.831698 | orchestrator | 2026-01-10 14:49:56 | INFO  | Task 357e43dc-d06c-484d-ba54-c5ba1b41351b is in state STARTED 2026-01-10 14:49:56.834173 | orchestrator | 2026-01-10 14:49:56 | INFO  | Task 1cf9f347-4f7e-4a89-a8a9-0300b4019c27 is in state STARTED 2026-01-10 14:49:56.834217 | orchestrator | 2026-01-10 14:49:56 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:49:59.893797 | orchestrator | 2026-01-10 14:49:59 | INFO  | Task ec657c91-e16f-4aa7-9012-31df0c93af90 is in state STARTED 2026-01-10 14:49:59.893850 | orchestrator | 2026-01-10 14:49:59 | INFO  | Task a94b7b03-c0a5-4d57-aa04-80b708ece6d9 is in state STARTED 2026-01-10 14:49:59.893857 | orchestrator | 2026-01-10 14:49:59 | INFO  | Task 357e43dc-d06c-484d-ba54-c5ba1b41351b is in state STARTED 2026-01-10 14:49:59.893863 | orchestrator | 2026-01-10 14:49:59 | INFO  | Task 1cf9f347-4f7e-4a89-a8a9-0300b4019c27 is in state STARTED 2026-01-10 14:49:59.893868 | orchestrator | 2026-01-10 14:49:59 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:50:02.899891 | orchestrator | 2026-01-10 14:50:02 | INFO  | Task ec657c91-e16f-4aa7-9012-31df0c93af90 is in state STARTED 2026-01-10 14:50:02.903033 | orchestrator | 2026-01-10 14:50:02 | INFO  | Task 
a94b7b03-c0a5-4d57-aa04-80b708ece6d9 is in state STARTED 2026-01-10 14:50:02.905707 | orchestrator | 2026-01-10 14:50:02 | INFO  | Task 357e43dc-d06c-484d-ba54-c5ba1b41351b is in state STARTED 2026-01-10 14:50:02.908289 | orchestrator | 2026-01-10 14:50:02 | INFO  | Task 1cf9f347-4f7e-4a89-a8a9-0300b4019c27 is in state STARTED 2026-01-10 14:50:02.908349 | orchestrator | 2026-01-10 14:50:02 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:50:05.945408 | orchestrator | 2026-01-10 14:50:05 | INFO  | Task ec657c91-e16f-4aa7-9012-31df0c93af90 is in state STARTED 2026-01-10 14:50:05.945773 | orchestrator | 2026-01-10 14:50:05 | INFO  | Task a94b7b03-c0a5-4d57-aa04-80b708ece6d9 is in state STARTED 2026-01-10 14:50:05.946496 | orchestrator | 2026-01-10 14:50:05 | INFO  | Task 357e43dc-d06c-484d-ba54-c5ba1b41351b is in state STARTED 2026-01-10 14:50:05.947212 | orchestrator | 2026-01-10 14:50:05 | INFO  | Task 1cf9f347-4f7e-4a89-a8a9-0300b4019c27 is in state STARTED 2026-01-10 14:50:05.947237 | orchestrator | 2026-01-10 14:50:05 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:50:09.012336 | orchestrator | 2026-01-10 14:50:09 | INFO  | Task ec657c91-e16f-4aa7-9012-31df0c93af90 is in state STARTED 2026-01-10 14:50:09.016806 | orchestrator | 2026-01-10 14:50:09 | INFO  | Task a94b7b03-c0a5-4d57-aa04-80b708ece6d9 is in state STARTED 2026-01-10 14:50:09.018731 | orchestrator | 2026-01-10 14:50:09 | INFO  | Task 357e43dc-d06c-484d-ba54-c5ba1b41351b is in state STARTED 2026-01-10 14:50:09.020272 | orchestrator | 2026-01-10 14:50:09 | INFO  | Task 1cf9f347-4f7e-4a89-a8a9-0300b4019c27 is in state SUCCESS 2026-01-10 14:50:09.022484 | orchestrator | 2026-01-10 14:50:09 | INFO  | Task 08260106-6ef9-4439-a5b4-b55560d7a436 is in state STARTED 2026-01-10 14:50:09.022534 | orchestrator | 2026-01-10 14:50:09 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:50:12.076324 | orchestrator | 2026-01-10 14:50:12 | INFO  | Task 
ec657c91-e16f-4aa7-9012-31df0c93af90 is in state STARTED 2026-01-10 14:50:12.077907 | orchestrator | 2026-01-10 14:50:12 | INFO  | Task a94b7b03-c0a5-4d57-aa04-80b708ece6d9 is in state STARTED 2026-01-10 14:50:12.080949 | orchestrator | 2026-01-10 14:50:12 | INFO  | Task 357e43dc-d06c-484d-ba54-c5ba1b41351b is in state STARTED 2026-01-10 14:50:12.083392 | orchestrator | 2026-01-10 14:50:12 | INFO  | Task 08260106-6ef9-4439-a5b4-b55560d7a436 is in state STARTED 2026-01-10 14:50:12.083472 | orchestrator | 2026-01-10 14:50:12 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:50:15.127365 | orchestrator | 2026-01-10 14:50:15 | INFO  | Task ec657c91-e16f-4aa7-9012-31df0c93af90 is in state SUCCESS 2026-01-10 14:50:15.128516 | orchestrator | 2026-01-10 14:50:15.128555 | orchestrator | 2026-01-10 14:50:15.128561 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-10 14:50:15.128566 | orchestrator | 2026-01-10 14:50:15.128571 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-10 14:50:15.128575 | orchestrator | Saturday 10 January 2026 14:49:31 +0000 (0:00:00.287) 0:00:00.287 ****** 2026-01-10 14:50:15.128580 | orchestrator | ok: [testbed-manager] 2026-01-10 14:50:15.128585 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:50:15.128589 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:50:15.128593 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:50:15.128598 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:50:15.128602 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:50:15.128606 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:50:15.128610 | orchestrator | 2026-01-10 14:50:15.128615 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-10 14:50:15.128619 | orchestrator | Saturday 10 January 2026 14:49:32 +0000 (0:00:01.348) 0:00:01.636 ****** 2026-01-10 14:50:15.128624 | orchestrator | 
ok: [testbed-manager] => (item=enable_ceph_rgw_True)
2026-01-10 14:50:15.128628 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True)
2026-01-10 14:50:15.128633 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True)
2026-01-10 14:50:15.128637 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True)
2026-01-10 14:50:15.128641 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True)
2026-01-10 14:50:15.128645 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True)
2026-01-10 14:50:15.128649 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True)
2026-01-10 14:50:15.128653 | orchestrator |
2026-01-10 14:50:15.128657 | orchestrator | PLAY [Apply role ceph-rgw] *****************************************************
2026-01-10 14:50:15.128660 | orchestrator |
2026-01-10 14:50:15.128664 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************
2026-01-10 14:50:15.128668 | orchestrator | Saturday 10 January 2026 14:49:33 +0000 (0:00:00.971) 0:00:02.607 ******
2026-01-10 14:50:15.128674 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-10 14:50:15.128684 | orchestrator |
2026-01-10 14:50:15.128693 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] **********************
2026-01-10 14:50:15.128699 | orchestrator | Saturday 10 January 2026 14:49:35 +0000 (0:00:02.039) 0:00:04.646 ******
2026-01-10 14:50:15.128705 | orchestrator | changed: [testbed-manager] => (item=swift (object-store))
2026-01-10 14:50:15.128711 | orchestrator |
2026-01-10 14:50:15.128717 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] *********************
2026-01-10 14:50:15.128722 | orchestrator | Saturday 10 January 2026 14:49:39 +0000 (0:00:03.845) 0:00:08.492 ******
2026-01-10 14:50:15.128728 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal)
2026-01-10 14:50:15.128735 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public)
2026-01-10 14:50:15.128741 | orchestrator |
2026-01-10 14:50:15.128746 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] **********************
2026-01-10 14:50:15.128752 | orchestrator | Saturday 10 January 2026 14:49:45 +0000 (0:00:06.240) 0:00:14.732 ******
2026-01-10 14:50:15.128757 | orchestrator | ok: [testbed-manager] => (item=service)
2026-01-10 14:50:15.128763 | orchestrator |
2026-01-10 14:50:15.128769 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] *************************
2026-01-10 14:50:15.128789 | orchestrator | Saturday 10 January 2026 14:49:49 +0000 (0:00:03.559) 0:00:18.292 ******
2026-01-10 14:50:15.128796 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-01-10 14:50:15.128802 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service)
2026-01-10 14:50:15.128808 | orchestrator |
2026-01-10 14:50:15.128814 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] *************************
2026-01-10 14:50:15.128820 | orchestrator | Saturday 10 January 2026 14:49:53 +0000 (0:00:04.446) 0:00:22.739 ******
2026-01-10 14:50:15.128827 | orchestrator | ok: [testbed-manager] => (item=admin)
2026-01-10 14:50:15.128832 | orchestrator | changed: [testbed-manager] => (item=ResellerAdmin)
2026-01-10 14:50:15.128837 | orchestrator |
2026-01-10 14:50:15.128843 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ********************
2026-01-10 14:50:15.128849 | orchestrator | Saturday 10 January 2026 14:50:01 +0000 (0:00:07.822) 0:00:30.561 ******
2026-01-10 14:50:15.128856 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service -> admin)
2026-01-10 14:50:15.128861 | orchestrator |
2026-01-10 14:50:15.128867 | orchestrator | PLAY RECAP *********************************************************************
2026-01-10 14:50:15.128872 | orchestrator | testbed-manager : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:50:15.128879 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:50:15.128894 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:50:15.128900 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:50:15.128906 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:50:15.129164 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:50:15.129214 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:50:15.129219 | orchestrator |
2026-01-10 14:50:15.129223 | orchestrator |
2026-01-10 14:50:15.129227 | orchestrator | TASKS RECAP ********************************************************************
2026-01-10 14:50:15.129231 | orchestrator | Saturday 10 January 2026 14:50:05 +0000 (0:00:04.152) 0:00:34.714 ******
2026-01-10 14:50:15.129235 | orchestrator | ===============================================================================
2026-01-10 14:50:15.129239 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 7.82s
2026-01-10 14:50:15.129243 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 6.24s
2026-01-10 14:50:15.129247 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 4.45s
2026-01-10 14:50:15.129251 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 4.15s
2026-01-10 14:50:15.129257 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 3.85s
2026-01-10 14:50:15.129263 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.56s
2026-01-10 14:50:15.129270 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 2.04s
2026-01-10 14:50:15.129276 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.35s
2026-01-10 14:50:15.129283 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.97s
2026-01-10 14:50:15.129290 | orchestrator |
2026-01-10 14:50:15.129293 | orchestrator |
2026-01-10 14:50:15.129297 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-10 14:50:15.129301 | orchestrator |
2026-01-10 14:50:15.129304 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-10 14:50:15.129315 | orchestrator | Saturday 10 January 2026 14:48:24 +0000 (0:00:00.256) 0:00:00.256 ******
2026-01-10 14:50:15.129319 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:50:15.129323 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:50:15.129327 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:50:15.129333 | orchestrator |
2026-01-10 14:50:15.129339 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-10 14:50:15.129345 | orchestrator | Saturday 10 January 2026 14:48:24 +0000 (0:00:00.266) 0:00:00.523 ******
2026-01-10 14:50:15.129352 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True)
2026-01-10 14:50:15.129358 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True)
2026-01-10 14:50:15.129365 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True)
2026-01-10 14:50:15.129371 | orchestrator |
2026-01-10 14:50:15.129377 | orchestrator | PLAY [Apply role magnum] *******************************************************
2026-01-10 14:50:15.129384 | orchestrator |
2026-01-10 14:50:15.129390 | orchestrator | TASK [magnum : include_tasks] **************************************************
2026-01-10 14:50:15.129397 | orchestrator | Saturday 10 January 2026 14:48:25 +0000 (0:00:00.583) 0:00:01.106 ******
2026-01-10 14:50:15.129403 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:50:15.129410 | orchestrator |
2026-01-10 14:50:15.129416 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************
2026-01-10 14:50:15.129422 | orchestrator | Saturday 10 January 2026 14:48:26 +0000 (0:00:00.900) 0:00:02.007 ******
2026-01-10 14:50:15.129429 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra))
2026-01-10 14:50:15.129436 | orchestrator |
2026-01-10 14:50:15.129442 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] ***********************
2026-01-10 14:50:15.129448 | orchestrator | Saturday 10 January 2026 14:48:29 +0000 (0:00:03.102) 0:00:05.109 ******
2026-01-10 14:50:15.129455 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal)
2026-01-10 14:50:15.129461 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public)
2026-01-10 14:50:15.129467 | orchestrator |
2026-01-10 14:50:15.129474 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************
2026-01-10 14:50:15.129479 | orchestrator | Saturday 10 January 2026 14:48:36 +0000 (0:00:07.195) 0:00:12.305 ******
2026-01-10 14:50:15.129482 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-01-10 14:50:15.129486 | orchestrator |
2026-01-10 14:50:15.129490 | orchestrator | TASK [service-ks-register : magnum | Creating users] ***************************
2026-01-10 14:50:15.129494 | orchestrator | Saturday 10 January 2026 14:48:39 +0000 (0:00:03.512) 0:00:15.817 ******
2026-01-10 14:50:15.129497 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-01-10 14:50:15.129501 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service)
2026-01-10 14:50:15.129505 | orchestrator |
2026-01-10 14:50:15.129508 | orchestrator | TASK [service-ks-register : magnum | Creating roles] ***************************
2026-01-10 14:50:15.129512 | orchestrator | Saturday 10 January 2026 14:48:43 +0000 (0:00:03.620) 0:00:19.437 ******
2026-01-10 14:50:15.129516 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-01-10 14:50:15.129519 | orchestrator |
2026-01-10 14:50:15.129523 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] **********************
2026-01-10 14:50:15.129532 | orchestrator | Saturday 10 January 2026 14:48:46 +0000 (0:00:03.090) 0:00:22.528 ******
2026-01-10 14:50:15.129536 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin)
2026-01-10 14:50:15.129540 | orchestrator |
2026-01-10 14:50:15.129543 | orchestrator | TASK [magnum : Creating Magnum trustee domain] *********************************
2026-01-10 14:50:15.129547 | orchestrator | Saturday 10 January 2026 14:48:50 +0000 (0:00:03.315) 0:00:26.173 ******
2026-01-10 14:50:15.129551 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:50:15.129555 | orchestrator |
2026-01-10 14:50:15.129558 | orchestrator | TASK [magnum : Creating Magnum trustee user] ***********************************
2026-01-10 14:50:15.129571 | orchestrator | Saturday 10 January 2026 14:48:53 +0000 (0:00:03.739) 0:00:29.489 ******
2026-01-10 14:50:15.129575 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:50:15.129579 | orchestrator |
2026-01-10 14:50:15.129583 | orchestrator | TASK [magnum : Creating Magnum trustee user
role] ****************************** 2026-01-10 14:50:15.129587 | orchestrator | Saturday 10 January 2026 14:48:57 +0000 (0:00:03.739) 0:00:33.229 ****** 2026-01-10 14:50:15.129590 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:50:15.129594 | orchestrator | 2026-01-10 14:50:15.129598 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2026-01-10 14:50:15.129601 | orchestrator | Saturday 10 January 2026 14:49:00 +0000 (0:00:03.229) 0:00:36.458 ****** 2026-01-10 14:50:15.129607 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-10 14:50:15.129614 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-10 14:50:15.129621 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-10 14:50:15.129631 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-10 14:50:15.129648 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-10 14:50:15.129655 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-10 14:50:15.129662 | orchestrator | 2026-01-10 14:50:15.129668 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2026-01-10 14:50:15.129675 | orchestrator | Saturday 10 January 2026 14:49:01 +0000 (0:00:01.175) 
0:00:37.634 ******
2026-01-10 14:50:15.129681 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:50:15.129687 | orchestrator |
2026-01-10 14:50:15.129690 | orchestrator | TASK [magnum : Set magnum policy file] *****************************************
2026-01-10 14:50:15.129694 | orchestrator | Saturday 10 January 2026 14:49:01 +0000 (0:00:00.136) 0:00:37.771 ******
2026-01-10 14:50:15.129698 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:50:15.129701 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:50:15.129705 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:50:15.129709 | orchestrator |
2026-01-10 14:50:15.129713 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] ***************************
2026-01-10 14:50:15.129716 | orchestrator | Saturday 10 January 2026 14:49:02 +0000 (0:00:00.505) 0:00:38.277 ******
2026-01-10 14:50:15.129720 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-01-10 14:50:15.129724 | orchestrator |
2026-01-10 14:50:15.129729 | orchestrator | TASK [magnum : Copying over kubeconfig file] ***********************************
2026-01-10 14:50:15.129735 | orchestrator | Saturday 10 January 2026 14:49:03 +0000 (0:00:00.881) 0:00:39.159 ******
2026-01-10 14:50:15.129741 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port':
'9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-10 14:50:15.129768 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-10 14:50:15.129785 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-10 14:50:15.129792 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-10 14:50:15.129798 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-10 14:50:15.129805 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 
'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-10 14:50:15.129816 | orchestrator |
2026-01-10 14:50:15.129822 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ******************************
2026-01-10 14:50:15.129828 | orchestrator | Saturday 10 January 2026 14:49:05 +0000 (0:00:02.191) 0:00:41.350 ******
2026-01-10 14:50:15.129834 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:50:15.129839 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:50:15.129844 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:50:15.129850 | orchestrator |
2026-01-10 14:50:15.129856 | orchestrator | TASK [magnum : include_tasks] **************************************************
2026-01-10 14:50:15.129862 | orchestrator | Saturday 10 January 2026 14:49:05 +0000 (0:00:00.299) 0:00:41.649 ******
2026-01-10 14:50:15.129867 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:50:15.129873 | orchestrator |
2026-01-10 14:50:15.129882 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] *********
2026-01-10 14:50:15.129888 | orchestrator | Saturday 10 January 2026 14:49:06 +0000 (0:00:00.746) 0:00:42.396 ******
2026-01-10 14:50:15.129899 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes':
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-10 14:50:15.129905 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-10 14:50:15.129993 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-10 14:50:15.130041 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-10 14:50:15.130055 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-10 14:50:15.130068 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-10 14:50:15.130074 | orchestrator | 2026-01-10 14:50:15.130095 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2026-01-10 14:50:15.130101 | orchestrator | Saturday 10 January 2026 14:49:09 +0000 (0:00:02.587) 0:00:44.983 ****** 2026-01-10 14:50:15.130108 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-01-10 14:50:15.130115 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-10 14:50:15.130141 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:50:15.130149 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-01-10 14:50:15.130162 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-10 14:50:15.130169 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:50:15.130175 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-01-10 14:50:15.130181 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-10 14:50:15.130187 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:50:15.130193 | orchestrator | 2026-01-10 14:50:15.130198 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2026-01-10 14:50:15.130204 | orchestrator | Saturday 10 January 2026 14:49:09 +0000 (0:00:00.647) 0:00:45.631 ****** 2026-01-10 14:50:15.130214 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-01-10 14:50:15.130220 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': 
{'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-10 14:50:15.130228 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:50:15.130240 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-01-10 14:50:15.130246 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': 
{'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-10 14:50:15.130252 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:50:15.130258 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-01-10 14:50:15.130267 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-10 14:50:15.130273 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:50:15.130279 | orchestrator | 2026-01-10 14:50:15.130285 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2026-01-10 14:50:15.130292 | orchestrator | Saturday 10 January 2026 14:49:11 +0000 (0:00:01.365) 0:00:46.997 ****** 2026-01-10 14:50:15.130301 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-10 14:50:15.130312 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-10 14:50:15.130318 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-10 14:50:15.130328 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-10 14:50:15.130334 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-10 14:50:15.130342 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-10 14:50:15.130349 | orchestrator | 2026-01-10 
14:50:15.130358 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2026-01-10 14:50:15.130365 | orchestrator | Saturday 10 January 2026 14:49:13 +0000 (0:00:02.257) 0:00:49.254 ****** 2026-01-10 14:50:15.130373 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-10 14:50:15.130379 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-10 14:50:15.130389 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-10 14:50:15.130395 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-10 14:50:15.130408 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': 
{'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-10 14:50:15.130415 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-10 14:50:15.130421 | orchestrator | 2026-01-10 14:50:15.130427 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2026-01-10 14:50:15.130440 | orchestrator | Saturday 10 January 2026 14:49:18 +0000 (0:00:04.982) 0:00:54.237 ****** 2026-01-10 14:50:15.130446 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 
'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-01-10 14:50:15.130453 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-10 14:50:15.130459 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:50:15.130468 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-01-10 14:50:15.130480 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-10 14:50:15.130486 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:50:15.130493 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-01-10 14:50:15.130504 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-10 14:50:15.130510 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:50:15.130516 | orchestrator | 2026-01-10 14:50:15.130522 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2026-01-10 14:50:15.130529 | orchestrator | Saturday 10 January 2026 14:49:19 +0000 (0:00:01.530) 0:00:55.768 ****** 2026-01-10 14:50:15.130535 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-10 14:50:15.130548 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-10 14:50:15.130553 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-10 14:50:15.130559 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-10 14:50:15.130564 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-10 14:50:15.130570 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-10 14:50:15.130577 | orchestrator | 2026-01-10 14:50:15.130587 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-01-10 14:50:15.130594 | orchestrator | Saturday 10 January 2026 14:49:23 +0000 (0:00:03.664) 0:00:59.433 ****** 2026-01-10 14:50:15.130599 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:50:15.130605 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:50:15.130611 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:50:15.130617 | orchestrator | 2026-01-10 14:50:15.130623 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2026-01-10 14:50:15.130631 | orchestrator | Saturday 10 January 2026 14:49:23 +0000 (0:00:00.384) 0:00:59.818 ****** 2026-01-10 14:50:15.130637 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:50:15.130643 | orchestrator | 2026-01-10 14:50:15.130649 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2026-01-10 14:50:15.130655 | orchestrator | Saturday 10 January 2026 14:49:26 +0000 (0:00:02.703) 0:01:02.522 ****** 2026-01-10 14:50:15.130661 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:50:15.130667 | orchestrator | 2026-01-10 14:50:15.130673 | orchestrator | TASK 
[magnum : Running Magnum bootstrap container] ***************************** 2026-01-10 14:50:15.130684 | orchestrator | Saturday 10 January 2026 14:49:28 +0000 (0:00:02.235) 0:01:04.757 ****** 2026-01-10 14:50:15.130696 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:50:15.130702 | orchestrator | 2026-01-10 14:50:15.130708 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-01-10 14:50:15.130715 | orchestrator | Saturday 10 January 2026 14:49:44 +0000 (0:00:15.738) 0:01:20.495 ****** 2026-01-10 14:50:15.130721 | orchestrator | 2026-01-10 14:50:15.130725 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-01-10 14:50:15.130729 | orchestrator | Saturday 10 January 2026 14:49:44 +0000 (0:00:00.204) 0:01:20.700 ****** 2026-01-10 14:50:15.130732 | orchestrator | 2026-01-10 14:50:15.130751 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-01-10 14:50:15.130761 | orchestrator | Saturday 10 January 2026 14:49:45 +0000 (0:00:00.185) 0:01:20.885 ****** 2026-01-10 14:50:15.130767 | orchestrator | 2026-01-10 14:50:15.130773 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2026-01-10 14:50:15.130778 | orchestrator | Saturday 10 January 2026 14:49:45 +0000 (0:00:00.286) 0:01:21.171 ****** 2026-01-10 14:50:15.130784 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:50:15.130790 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:50:15.130796 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:50:15.130802 | orchestrator | 2026-01-10 14:50:15.130807 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2026-01-10 14:50:15.130813 | orchestrator | Saturday 10 January 2026 14:50:03 +0000 (0:00:18.225) 0:01:39.396 ****** 2026-01-10 14:50:15.130819 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:50:15.130825 
| orchestrator | changed: [testbed-node-1] 2026-01-10 14:50:15.130831 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:50:15.130836 | orchestrator | 2026-01-10 14:50:15.130842 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-10 14:50:15.130849 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-01-10 14:50:15.130855 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-01-10 14:50:15.130861 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-01-10 14:50:15.130868 | orchestrator | 2026-01-10 14:50:15.130874 | orchestrator | 2026-01-10 14:50:15.130880 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-10 14:50:15.130886 | orchestrator | Saturday 10 January 2026 14:50:12 +0000 (0:00:08.859) 0:01:48.256 ****** 2026-01-10 14:50:15.130892 | orchestrator | =============================================================================== 2026-01-10 14:50:15.130896 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 18.23s 2026-01-10 14:50:15.130900 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 15.74s 2026-01-10 14:50:15.130904 | orchestrator | magnum : Restart magnum-conductor container ----------------------------- 8.86s 2026-01-10 14:50:15.130908 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 7.20s 2026-01-10 14:50:15.130911 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 4.98s 2026-01-10 14:50:15.130915 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 3.74s 2026-01-10 14:50:15.130970 | orchestrator | magnum : Check magnum containers ---------------------------------------- 
3.66s 2026-01-10 14:50:15.130975 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 3.64s 2026-01-10 14:50:15.130978 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 3.62s 2026-01-10 14:50:15.130982 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.51s 2026-01-10 14:50:15.130986 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.32s 2026-01-10 14:50:15.130990 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.23s 2026-01-10 14:50:15.130998 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.10s 2026-01-10 14:50:15.131002 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.09s 2026-01-10 14:50:15.131005 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.71s 2026-01-10 14:50:15.131009 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.59s 2026-01-10 14:50:15.131013 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.26s 2026-01-10 14:50:15.131017 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.23s 2026-01-10 14:50:15.131021 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.19s 2026-01-10 14:50:15.131024 | orchestrator | magnum : Copying over existing policy file ------------------------------ 1.53s 2026-01-10 14:50:15.131028 | orchestrator | 2026-01-10 14:50:15 | INFO  | Task c00b516f-16a9-42ea-a1ca-3fe42da79d38 is in state STARTED 2026-01-10 14:50:15.131035 | orchestrator | 2026-01-10 14:50:15 | INFO  | Task a94b7b03-c0a5-4d57-aa04-80b708ece6d9 is in state STARTED 2026-01-10 14:50:15.131231 | orchestrator | 2026-01-10 14:50:15 | INFO  | Task 
357e43dc-d06c-484d-ba54-c5ba1b41351b is in state STARTED
2026-01-10 14:50:15.132761 | orchestrator | 2026-01-10 14:50:15 | INFO  | Task 08260106-6ef9-4439-a5b4-b55560d7a436 is in state STARTED
2026-01-10 14:50:15.132800 | orchestrator | 2026-01-10 14:50:15 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:50:18.178688 | orchestrator | 2026-01-10 14:50:18 | INFO  | Task c00b516f-16a9-42ea-a1ca-3fe42da79d38 is in state STARTED
2026-01-10 14:50:18.178736 | orchestrator | 2026-01-10 14:50:18 | INFO  | Task a94b7b03-c0a5-4d57-aa04-80b708ece6d9 is in state STARTED
2026-01-10 14:50:18.178741 | orchestrator | 2026-01-10 14:50:18 | INFO  | Task 357e43dc-d06c-484d-ba54-c5ba1b41351b is in state STARTED
2026-01-10 14:50:18.179449 | orchestrator | 2026-01-10 14:50:18 | INFO  | Task 08260106-6ef9-4439-a5b4-b55560d7a436 is in state STARTED
2026-01-10 14:50:18.179468 | orchestrator | 2026-01-10 14:50:18 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:50:21.228205 | orchestrator | 2026-01-10 14:50:21 | INFO  | Task c00b516f-16a9-42ea-a1ca-3fe42da79d38 is in state STARTED
2026-01-10 14:50:21.230306 | orchestrator | 2026-01-10 14:50:21 | INFO  | Task a94b7b03-c0a5-4d57-aa04-80b708ece6d9 is in state STARTED
2026-01-10 14:50:21.233335 | orchestrator | 2026-01-10 14:50:21 | INFO  | Task 357e43dc-d06c-484d-ba54-c5ba1b41351b is in state STARTED
2026-01-10 14:50:21.235113 | orchestrator | 2026-01-10 14:50:21 | INFO  | Task 08260106-6ef9-4439-a5b4-b55560d7a436 is in state STARTED
2026-01-10 14:50:21.235570 | orchestrator | 2026-01-10 14:50:21 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:50:24.281939 | orchestrator | 2026-01-10 14:50:24 | INFO  | Task c00b516f-16a9-42ea-a1ca-3fe42da79d38 is in state STARTED
2026-01-10 14:50:24.282754 | orchestrator | 2026-01-10 14:50:24 | INFO  | Task a94b7b03-c0a5-4d57-aa04-80b708ece6d9 is in state STARTED
2026-01-10 14:50:24.284259 | orchestrator | 2026-01-10 14:50:24 | INFO  | Task 357e43dc-d06c-484d-ba54-c5ba1b41351b is in state STARTED
2026-01-10 14:50:24.285236 | orchestrator | 2026-01-10 14:50:24 | INFO  | Task 08260106-6ef9-4439-a5b4-b55560d7a436 is in state STARTED
2026-01-10 14:50:24.285317 | orchestrator | 2026-01-10 14:50:24 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:50:27.327762 | orchestrator | 2026-01-10 14:50:27 | INFO  | Task c00b516f-16a9-42ea-a1ca-3fe42da79d38 is in state STARTED
2026-01-10 14:50:27.329341 | orchestrator | 2026-01-10 14:50:27 | INFO  | Task a94b7b03-c0a5-4d57-aa04-80b708ece6d9 is in state STARTED
2026-01-10 14:50:27.331830 | orchestrator | 2026-01-10 14:50:27 | INFO  | Task 357e43dc-d06c-484d-ba54-c5ba1b41351b is in state STARTED
2026-01-10 14:50:27.334504 | orchestrator | 2026-01-10 14:50:27 | INFO  | Task 08260106-6ef9-4439-a5b4-b55560d7a436 is in state STARTED
2026-01-10 14:50:27.334552 | orchestrator | 2026-01-10 14:50:27 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:50:30.388205 | orchestrator | 2026-01-10 14:50:30 | INFO  | Task c00b516f-16a9-42ea-a1ca-3fe42da79d38 is in state STARTED
2026-01-10 14:50:30.390108 | orchestrator | 2026-01-10 14:50:30 | INFO  | Task a94b7b03-c0a5-4d57-aa04-80b708ece6d9 is in state STARTED
2026-01-10 14:50:30.391828 | orchestrator | 2026-01-10 14:50:30 | INFO  | Task 357e43dc-d06c-484d-ba54-c5ba1b41351b is in state STARTED
2026-01-10 14:50:30.393940 | orchestrator | 2026-01-10 14:50:30 | INFO  | Task 08260106-6ef9-4439-a5b4-b55560d7a436 is in state STARTED
2026-01-10 14:50:30.394149 | orchestrator | 2026-01-10 14:50:30 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:50:33.440736 | orchestrator | 2026-01-10 14:50:33 | INFO  | Task c00b516f-16a9-42ea-a1ca-3fe42da79d38 is in state STARTED
2026-01-10 14:50:33.442181 | orchestrator | 2026-01-10 14:50:33 | INFO  | Task a94b7b03-c0a5-4d57-aa04-80b708ece6d9 is in state STARTED
2026-01-10 14:50:33.443169 | orchestrator | 2026-01-10 14:50:33 | INFO  | Task
70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED
2026-01-10 14:50:33.446072 | orchestrator | 2026-01-10 14:50:33 | INFO  | Task 357e43dc-d06c-484d-ba54-c5ba1b41351b is in state SUCCESS
2026-01-10 14:50:33.446285 | orchestrator |
2026-01-10 14:50:33.447673 | orchestrator |
2026-01-10 14:50:33.447720 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-10 14:50:33.447729 | orchestrator |
2026-01-10 14:50:33.447736 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-10 14:50:33.447743 | orchestrator | Saturday 10 January 2026 14:46:10 +0000 (0:00:00.248) 0:00:00.248 ******
2026-01-10 14:50:33.447749 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:50:33.447772 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:50:33.447781 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:50:33.447785 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:50:33.447789 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:50:33.447793 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:50:33.447797 | orchestrator |
2026-01-10 14:50:33.447801 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-10 14:50:33.447805 | orchestrator | Saturday 10 January 2026 14:46:10 +0000 (0:00:00.821) 0:00:01.069 ******
2026-01-10 14:50:33.447809 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True)
2026-01-10 14:50:33.447814 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True)
2026-01-10 14:50:33.447818 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True)
2026-01-10 14:50:33.447821 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True)
2026-01-10 14:50:33.447825 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True)
2026-01-10 14:50:33.447829 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True)
2026-01-10 14:50:33.447833 | orchestrator |
2026-01-10 14:50:33.447836 | orchestrator | PLAY [Apply role neutron] ******************************************************
2026-01-10 14:50:33.447840 | orchestrator |
2026-01-10 14:50:33.447844 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-01-10 14:50:33.447848 | orchestrator | Saturday 10 January 2026 14:46:11 +0000 (0:00:00.719) 0:00:01.789 ******
2026-01-10 14:50:33.447853 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-10 14:50:33.447858 | orchestrator |
2026-01-10 14:50:33.447862 | orchestrator | TASK [neutron : Get container facts] *******************************************
2026-01-10 14:50:33.447866 | orchestrator | Saturday 10 January 2026 14:46:12 +0000 (0:00:01.139) 0:00:02.928 ******
2026-01-10 14:50:33.447891 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:50:33.447944 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:50:33.447948 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:50:33.447952 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:50:33.447956 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:50:33.447960 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:50:33.447963 | orchestrator |
2026-01-10 14:50:33.447967 | orchestrator | TASK [neutron : Get container volume facts] ************************************
2026-01-10 14:50:33.447971 | orchestrator | Saturday 10 January 2026 14:46:14 +0000 (0:00:01.288) 0:00:04.217 ******
2026-01-10 14:50:33.447977 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:50:33.447986 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:50:33.447996 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:50:33.448001 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:50:33.448007 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:50:33.448013 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:50:33.448119 | orchestrator |
2026-01-10 14:50:33.448128 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************
2026-01-10 14:50:33.448134 | orchestrator | Saturday 10 January 2026 14:46:15 +0000 (0:00:00.960) 0:00:05.177 ******
2026-01-10 14:50:33.448141 | orchestrator | ok: [testbed-node-0] => {
2026-01-10 14:50:33.448148 | orchestrator |  "changed": false,
2026-01-10 14:50:33.448154 | orchestrator |  "msg": "All assertions passed"
2026-01-10 14:50:33.448160 | orchestrator | }
2026-01-10 14:50:33.448166 | orchestrator | ok: [testbed-node-1] => {
2026-01-10 14:50:33.448171 | orchestrator |  "changed": false,
2026-01-10 14:50:33.448177 | orchestrator |  "msg": "All assertions passed"
2026-01-10 14:50:33.448183 | orchestrator | }
2026-01-10 14:50:33.448189 | orchestrator | ok: [testbed-node-2] => {
2026-01-10 14:50:33.448195 | orchestrator |  "changed": false,
2026-01-10 14:50:33.448201 | orchestrator |  "msg": "All assertions passed"
2026-01-10 14:50:33.448724 | orchestrator | }
2026-01-10 14:50:33.448757 | orchestrator | ok: [testbed-node-3] => {
2026-01-10 14:50:33.448764 | orchestrator |  "changed": false,
2026-01-10 14:50:33.448770 | orchestrator |  "msg": "All assertions passed"
2026-01-10 14:50:33.448776 | orchestrator | }
2026-01-10 14:50:33.448782 | orchestrator | ok: [testbed-node-4] => {
2026-01-10 14:50:33.448788 | orchestrator |  "changed": false,
2026-01-10 14:50:33.448794 | orchestrator |  "msg": "All assertions passed"
2026-01-10 14:50:33.448800 | orchestrator | }
2026-01-10 14:50:33.448806 | orchestrator | ok: [testbed-node-5] => {
2026-01-10 14:50:33.448812 | orchestrator |  "changed": false,
2026-01-10 14:50:33.448818 | orchestrator |  "msg": "All assertions passed"
2026-01-10 14:50:33.448823 | orchestrator | }
2026-01-10 14:50:33.448830 | orchestrator |
2026-01-10 14:50:33.448837 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************
2026-01-10 14:50:33.448844 | orchestrator | Saturday 10 January 2026 14:46:15 +0000 (0:00:00.832) 0:00:06.010 ******
2026-01-10 14:50:33.448850 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:50:33.448856 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:50:33.448862 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:50:33.448869 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:50:33.448875 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:50:33.448882 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:50:33.448888 | orchestrator |
2026-01-10 14:50:33.448914 | orchestrator | TASK [service-ks-register : neutron | Creating services] ***********************
2026-01-10 14:50:33.448921 | orchestrator | Saturday 10 January 2026 14:46:16 +0000 (0:00:00.631) 0:00:06.642 ******
2026-01-10 14:50:33.448927 | orchestrator | changed: [testbed-node-0] => (item=neutron (network))
2026-01-10 14:50:33.448934 | orchestrator |
2026-01-10 14:50:33.448941 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] **********************
2026-01-10 14:50:33.448947 | orchestrator | Saturday 10 January 2026 14:46:20 +0000 (0:00:03.503) 0:00:10.146 ******
2026-01-10 14:50:33.448951 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal)
2026-01-10 14:50:33.448972 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public)
2026-01-10 14:50:33.448976 | orchestrator |
2026-01-10 14:50:33.449014 | orchestrator | TASK [service-ks-register : neutron | Creating projects] ***********************
2026-01-10 14:50:33.449019 | orchestrator | Saturday 10 January 2026 14:46:27 +0000 (0:00:03.540) 0:00:17.406 ******
2026-01-10 14:50:33.449022 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-01-10 14:50:33.449026 | orchestrator |
2026-01-10 14:50:33.449030 | orchestrator | TASK [service-ks-register : neutron | Creating users] **************************
2026-01-10 14:50:33.449034 | orchestrator | Saturday 10 January 2026 14:46:30 +0000 (0:00:03.540) 0:00:20.947 ******
2026-01-10 14:50:33.449046 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-01-10 14:50:33.449050 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service)
2026-01-10 14:50:33.449054 | orchestrator |
2026-01-10 14:50:33.449058 | orchestrator | TASK [service-ks-register : neutron | Creating roles] **************************
2026-01-10 14:50:33.449061 | orchestrator | Saturday 10 January 2026 14:46:34 +0000 (0:00:03.662) 0:00:24.609 ******
2026-01-10 14:50:33.449065 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-01-10 14:50:33.449069 | orchestrator |
2026-01-10 14:50:33.449072 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] *********************
2026-01-10 14:50:33.449076 | orchestrator | Saturday 10 January 2026 14:46:37 +0000 (0:00:03.279) 0:00:27.888 ******
2026-01-10 14:50:33.449080 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin)
2026-01-10 14:50:33.449084 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service)
2026-01-10 14:50:33.449087 | orchestrator |
2026-01-10 14:50:33.449091 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-01-10 14:50:33.449095 | orchestrator | Saturday 10 January 2026 14:46:44 +0000 (0:00:07.222) 0:00:35.111 ******
2026-01-10 14:50:33.449099 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:50:33.449102 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:50:33.449106 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:50:33.449110 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:50:33.449114 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:50:33.449117 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:50:33.449121 | orchestrator |
2026-01-10 14:50:33.449124 | orchestrator | TASK [Load and persist kernel modules] *****************************************
2026-01-10 14:50:33.449128 | orchestrator | Saturday 10 January 2026 14:46:45 +0000 (0:00:00.697) 0:00:35.809 ******
2026-01-10 14:50:33.449132 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:50:33.449135 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:50:33.449139 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:50:33.449143 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:50:33.449147 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:50:33.449150 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:50:33.449154 | orchestrator |
2026-01-10 14:50:33.449158 | orchestrator | TASK [neutron : Check IPv6 support] ********************************************
2026-01-10 14:50:33.449161 | orchestrator | Saturday 10 January 2026 14:46:47 +0000 (0:00:02.004) 0:00:37.813 ******
2026-01-10 14:50:33.449165 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:50:33.449169 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:50:33.449173 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:50:33.449176 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:50:33.449180 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:50:33.449184 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:50:33.449187 | orchestrator |
2026-01-10 14:50:33.449191 | orchestrator | TASK [Setting sysctl values] ***************************************************
2026-01-10 14:50:33.449195 | orchestrator | Saturday 10 January 2026 14:46:48 +0000 (0:00:01.058) 0:00:38.871 ******
2026-01-10 14:50:33.449198 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:50:33.449202 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:50:33.449206 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:50:33.449214 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:50:33.449218 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:50:33.449221 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:50:33.449225
| orchestrator | 2026-01-10 14:50:33.449229 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2026-01-10 14:50:33.449233 | orchestrator | Saturday 10 January 2026 14:46:51 +0000 (0:00:02.823) 0:00:41.695 ****** 2026-01-10 14:50:33.449240 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-10 14:50:33.449266 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 
'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-10 14:50:33.449272 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-10 14:50:33.449277 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-10 14:50:33.449283 | orchestrator | changed: [testbed-node-3] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-10 14:50:33.449290 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-10 14:50:33.449294 | orchestrator | 2026-01-10 14:50:33.449298 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2026-01-10 14:50:33.449302 | orchestrator | Saturday 10 January 2026 14:46:54 +0000 (0:00:03.077) 0:00:44.773 ****** 2026-01-10 14:50:33.449306 | orchestrator | [WARNING]: Skipped 2026-01-10 14:50:33.449310 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2026-01-10 14:50:33.449315 | 
orchestrator | due to this access issue: 2026-01-10 14:50:33.449319 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2026-01-10 14:50:33.449323 | orchestrator | a directory 2026-01-10 14:50:33.449327 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-10 14:50:33.449332 | orchestrator | 2026-01-10 14:50:33.449347 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-01-10 14:50:33.449352 | orchestrator | Saturday 10 January 2026 14:46:55 +0000 (0:00:00.726) 0:00:45.499 ****** 2026-01-10 14:50:33.449357 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-10 14:50:33.449364 | orchestrator | 2026-01-10 14:50:33.449371 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2026-01-10 14:50:33.449376 | orchestrator | Saturday 10 January 2026 14:46:56 +0000 (0:00:01.076) 0:00:46.575 ****** 2026-01-10 14:50:33.449380 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': 
'9696'}}}}) 2026-01-10 14:50:33.449385 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-10 14:50:33.449393 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-10 14:50:33.449397 | orchestrator | changed: [testbed-node-3] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-10 14:50:33.449418 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-10 14:50:33.449424 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-10 14:50:33.449428 | orchestrator | 2026-01-10 14:50:33.449433 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2026-01-10 14:50:33.449441 | orchestrator | Saturday 10 January 2026 14:47:01 +0000 (0:00:04.738) 0:00:51.314 ****** 2026-01-10 14:50:33.449445 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-10 14:50:33.449450 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:50:33.449455 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-10 14:50:33.449459 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:50:33.449476 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-10 14:50:33.449481 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:50:33.449488 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-10 14:50:33.449493 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:50:33.449500 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-10 14:50:33.449505 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:50:33.449510 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-10 14:50:33.449514 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:50:33.449518 | orchestrator | 2026-01-10 14:50:33.449523 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2026-01-10 14:50:33.449527 | orchestrator | Saturday 10 January 2026 14:47:04 +0000 (0:00:03.607) 0:00:54.921 ****** 2026-01-10 14:50:33.449532 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-10 14:50:33.449536 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:50:33.449557 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-10 14:50:33.449562 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:50:33.449566 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-10 14:50:33.449577 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:50:33.449581 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-10 14:50:33.449588 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:50:33.449594 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-10 14:50:33.449601 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:50:33.449608 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-10 14:50:33.449615 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:50:33.449621 | orchestrator | 2026-01-10 14:50:33.449627 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2026-01-10 14:50:33.449636 | orchestrator | Saturday 10 January 2026 14:47:08 +0000 (0:00:03.402) 0:00:58.323 ****** 2026-01-10 14:50:33.449643 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:50:33.449649 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:50:33.449655 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:50:33.449662 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:50:33.449668 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:50:33.449674 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:50:33.449685 | orchestrator | 2026-01-10 14:50:33.449695 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2026-01-10 14:50:33.449700 | orchestrator | Saturday 10 January 2026 14:47:10 +0000 (0:00:02.461) 0:01:00.785 ****** 2026-01-10 14:50:33.449704 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:50:33.449708 | orchestrator | 2026-01-10 14:50:33.449712 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2026-01-10 14:50:33.449717 | orchestrator | Saturday 10 January 2026 14:47:10 +0000 (0:00:00.104) 0:01:00.890 ****** 2026-01-10 14:50:33.449722 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:50:33.449728 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:50:33.449734 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:50:33.449740 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:50:33.449746 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:50:33.449751 | orchestrator | skipping: 
[testbed-node-5] 2026-01-10 14:50:33.449756 | orchestrator | 2026-01-10 14:50:33.449761 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2026-01-10 14:50:33.449767 | orchestrator | Saturday 10 January 2026 14:47:11 +0000 (0:00:00.694) 0:01:01.584 ****** 2026-01-10 14:50:33.449774 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-10 14:50:33.449780 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:50:33.449785 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-10 14:50:33.449813 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:50:33.449820 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-10 14:50:33.449826 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:50:33.449850 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 
'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-10 14:50:33.449858 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:50:33.449862 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-10 14:50:33.449867 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-10 14:50:33.449871 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:50:33.449874 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:50:33.449878 | 
orchestrator | 2026-01-10 14:50:33.449882 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2026-01-10 14:50:33.449886 | orchestrator | Saturday 10 January 2026 14:47:14 +0000 (0:00:03.303) 0:01:04.888 ****** 2026-01-10 14:50:33.449890 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-10 14:50:33.449922 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 
'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-10 14:50:33.449933 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-10 14:50:33.449938 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-10 14:50:33.449942 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 
'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-10 14:50:33.449946 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-10 14:50:33.449950 | orchestrator | 2026-01-10 14:50:33.449957 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2026-01-10 14:50:33.449961 | orchestrator | Saturday 10 January 2026 14:47:19 +0000 (0:00:04.342) 0:01:09.231 ****** 2026-01-10 14:50:33.449968 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-10 14:50:33.449976 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-10 14:50:33.449980 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-10 14:50:33.449984 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-10 14:50:33.449988 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': 
{'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-10 14:50:33.449999 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-10 14:50:33.450003 | orchestrator | 2026-01-10 14:50:33.450010 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2026-01-10 14:50:33.450049 | orchestrator | Saturday 10 January 2026 14:47:26 +0000 (0:00:07.693) 0:01:16.925 ****** 2026-01-10 14:50:33.450053 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 
'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-10 14:50:33.450057 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:50:33.450061 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-10 14:50:33.450065 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:50:33.450069 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-10 14:50:33.450076 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:50:33.450080 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-10 14:50:33.450084 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:50:33.450095 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-10 14:50:33.450099 | orchestrator | skipping: [testbed-node-5] 
2026-01-10 14:50:33.450103 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-10 14:50:33.450107 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:50:33.450111 | orchestrator | 2026-01-10 14:50:33.450115 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2026-01-10 14:50:33.450119 | orchestrator | Saturday 10 January 2026 14:47:30 +0000 (0:00:03.884) 0:01:20.809 ****** 2026-01-10 14:50:33.450123 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:50:33.450126 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:50:33.450130 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:50:33.450134 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:50:33.450138 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:50:33.450142 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:50:33.450145 | orchestrator | 2026-01-10 14:50:33.450149 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2026-01-10 14:50:33.450153 | orchestrator | Saturday 10 January 2026 14:47:34 +0000 (0:00:03.594) 
0:01:24.404 ****** 2026-01-10 14:50:33.450157 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-10 14:50:33.450164 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:50:33.450168 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-10 14:50:33.450172 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:50:33.450184 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-10 14:50:33.450189 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:50:33.450193 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-10 14:50:33.450197 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-10 14:50:33.450207 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-10 14:50:33.450211 | orchestrator | 2026-01-10 14:50:33.450214 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2026-01-10 14:50:33.450218 | orchestrator | Saturday 10 January 2026 14:47:39 +0000 (0:00:04.814) 0:01:29.218 ****** 2026-01-10 14:50:33.450222 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:50:33.450226 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:50:33.450229 | orchestrator | skipping: 
[testbed-node-1] 2026-01-10 14:50:33.450233 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:50:33.450237 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:50:33.450241 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:50:33.450244 | orchestrator | 2026-01-10 14:50:33.450248 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2026-01-10 14:50:33.450252 | orchestrator | Saturday 10 January 2026 14:47:41 +0000 (0:00:02.722) 0:01:31.940 ****** 2026-01-10 14:50:33.450256 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:50:33.450259 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:50:33.450263 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:50:33.450267 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:50:33.450270 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:50:33.450274 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:50:33.450278 | orchestrator | 2026-01-10 14:50:33.450281 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2026-01-10 14:50:33.450285 | orchestrator | Saturday 10 January 2026 14:47:44 +0000 (0:00:02.324) 0:01:34.265 ****** 2026-01-10 14:50:33.450292 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:50:33.450296 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:50:33.450300 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:50:33.450304 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:50:33.450307 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:50:33.450311 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:50:33.450315 | orchestrator | 2026-01-10 14:50:33.450319 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2026-01-10 14:50:33.450325 | orchestrator | Saturday 10 January 2026 14:47:46 +0000 (0:00:02.028) 0:01:36.294 ****** 2026-01-10 14:50:33.450329 | orchestrator | skipping: 
[testbed-node-0] 2026-01-10 14:50:33.450333 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:50:33.450336 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:50:33.450340 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:50:33.450344 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:50:33.450348 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:50:33.450351 | orchestrator | 2026-01-10 14:50:33.450355 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2026-01-10 14:50:33.450359 | orchestrator | Saturday 10 January 2026 14:47:48 +0000 (0:00:02.006) 0:01:38.300 ****** 2026-01-10 14:50:33.450362 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:50:33.450366 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:50:33.450373 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:50:33.450377 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:50:33.450381 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:50:33.450384 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:50:33.450388 | orchestrator | 2026-01-10 14:50:33.450392 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2026-01-10 14:50:33.450396 | orchestrator | Saturday 10 January 2026 14:47:51 +0000 (0:00:03.568) 0:01:41.869 ****** 2026-01-10 14:50:33.450399 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:50:33.450403 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:50:33.450407 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:50:33.450410 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:50:33.450414 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:50:33.450418 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:50:33.450422 | orchestrator | 2026-01-10 14:50:33.450425 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2026-01-10 14:50:33.450429 | 
orchestrator | Saturday 10 January 2026 14:47:53 +0000 (0:00:01.925) 0:01:43.794 ****** 2026-01-10 14:50:33.450433 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-01-10 14:50:33.450437 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:50:33.450441 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-01-10 14:50:33.450444 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:50:33.450448 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-01-10 14:50:33.450452 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:50:33.450456 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-01-10 14:50:33.450459 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:50:33.450463 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-01-10 14:50:33.450467 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:50:33.450471 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-01-10 14:50:33.450474 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:50:33.450478 | orchestrator | 2026-01-10 14:50:33.450482 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2026-01-10 14:50:33.450486 | orchestrator | Saturday 10 January 2026 14:47:55 +0000 (0:00:01.834) 0:01:45.629 ****** 2026-01-10 14:50:33.450490 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-10 14:50:33.450494 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:50:33.450502 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-10 14:50:33.450523 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:50:33.450527 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-10 14:50:33.450531 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:50:33.450535 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-10 14:50:33.450539 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:50:33.450543 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-10 14:50:33.450547 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:50:33.450551 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-10 14:50:33.450555 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:50:33.450562 | orchestrator | 2026-01-10 14:50:33.450566 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2026-01-10 14:50:33.450569 | orchestrator | Saturday 10 January 2026 14:47:57 +0000 (0:00:02.196) 0:01:47.826 ****** 2026-01-10 14:50:33.450582 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-10 14:50:33.450589 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:50:33.450595 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-10 14:50:33.450601 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:50:33.450607 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-10 14:50:33.450613 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:50:33.450620 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-10 14:50:33.450626 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:50:33.450636 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-10 14:50:33.450650 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:50:33.450660 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-10 14:50:33.450668 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:50:33.450672 | orchestrator | 2026-01-10 14:50:33.450676 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2026-01-10 14:50:33.450680 | orchestrator | Saturday 10 January 2026 14:48:00 +0000 (0:00:02.864) 0:01:50.690 ****** 2026-01-10 14:50:33.450684 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:50:33.450688 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:50:33.450691 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:50:33.450695 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:50:33.450699 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:50:33.450702 | orchestrator | skipping: 
[testbed-node-5] 2026-01-10 14:50:33.450706 | orchestrator | 2026-01-10 14:50:33.450710 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2026-01-10 14:50:33.450714 | orchestrator | Saturday 10 January 2026 14:48:02 +0000 (0:00:02.235) 0:01:52.925 ****** 2026-01-10 14:50:33.450717 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:50:33.450721 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:50:33.450725 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:50:33.450729 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:50:33.450732 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:50:33.450736 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:50:33.450740 | orchestrator | 2026-01-10 14:50:33.450743 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2026-01-10 14:50:33.450747 | orchestrator | Saturday 10 January 2026 14:48:06 +0000 (0:00:03.418) 0:01:56.344 ****** 2026-01-10 14:50:33.450751 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:50:33.450755 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:50:33.450759 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:50:33.450762 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:50:33.450766 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:50:33.450770 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:50:33.450774 | orchestrator | 2026-01-10 14:50:33.450778 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2026-01-10 14:50:33.450781 | orchestrator | Saturday 10 January 2026 14:48:09 +0000 (0:00:03.111) 0:01:59.455 ****** 2026-01-10 14:50:33.450785 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:50:33.450789 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:50:33.450797 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:50:33.450801 | orchestrator | skipping: 
[testbed-node-4] 2026-01-10 14:50:33.450805 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:50:33.450809 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:50:33.450812 | orchestrator | 2026-01-10 14:50:33.450816 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] ********************************** 2026-01-10 14:50:33.450820 | orchestrator | Saturday 10 January 2026 14:48:12 +0000 (0:00:02.884) 0:02:02.340 ****** 2026-01-10 14:50:33.450824 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:50:33.450828 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:50:33.450832 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:50:33.450836 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:50:33.450839 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:50:33.450843 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:50:33.450848 | orchestrator | 2026-01-10 14:50:33.450854 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************ 2026-01-10 14:50:33.450860 | orchestrator | Saturday 10 January 2026 14:48:14 +0000 (0:00:02.065) 0:02:04.405 ****** 2026-01-10 14:50:33.450866 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:50:33.450872 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:50:33.450877 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:50:33.450883 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:50:33.450889 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:50:33.450932 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:50:33.450939 | orchestrator | 2026-01-10 14:50:33.450945 | orchestrator | TASK [neutron : Copying over nsx.ini] ****************************************** 2026-01-10 14:50:33.450952 | orchestrator | Saturday 10 January 2026 14:48:16 +0000 (0:00:02.144) 0:02:06.550 ****** 2026-01-10 14:50:33.450958 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:50:33.450965 | orchestrator | skipping: 
[testbed-node-1] 2026-01-10 14:50:33.450968 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:50:33.450972 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:50:33.450976 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:50:33.450979 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:50:33.450983 | orchestrator | 2026-01-10 14:50:33.450987 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] ************************** 2026-01-10 14:50:33.450991 | orchestrator | Saturday 10 January 2026 14:48:19 +0000 (0:00:03.519) 0:02:10.070 ****** 2026-01-10 14:50:33.450994 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:50:33.450998 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:50:33.451002 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:50:33.451005 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:50:33.451009 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:50:33.451013 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:50:33.451016 | orchestrator | 2026-01-10 14:50:33.451020 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ******************************** 2026-01-10 14:50:33.451028 | orchestrator | Saturday 10 January 2026 14:48:22 +0000 (0:00:02.510) 0:02:12.580 ****** 2026-01-10 14:50:33.451032 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:50:33.451036 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:50:33.451039 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:50:33.451043 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:50:33.451047 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:50:33.451053 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:50:33.451057 | orchestrator | 2026-01-10 14:50:33.451061 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] **************************** 2026-01-10 14:50:33.451065 | orchestrator | Saturday 10 January 2026 14:48:24 +0000 (0:00:01.895) 
0:02:14.476 ****** 2026-01-10 14:50:33.451069 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-01-10 14:50:33.451074 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:50:33.451077 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-01-10 14:50:33.451085 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:50:33.451089 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-01-10 14:50:33.451093 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:50:33.451097 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-01-10 14:50:33.451100 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:50:33.451104 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-01-10 14:50:33.451108 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:50:33.451111 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-01-10 14:50:33.451115 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:50:33.451119 | orchestrator | 2026-01-10 14:50:33.451123 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2026-01-10 14:50:33.451126 | orchestrator | Saturday 10 January 2026 14:48:26 +0000 (0:00:02.245) 0:02:16.721 ****** 2026-01-10 14:50:33.451131 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-10 14:50:33.451135 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:50:33.451138 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-10 14:50:33.451142 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:50:33.451149 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-10 14:50:33.451153 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:50:33.451159 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-10 14:50:33.451166 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:50:33.451170 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-10 14:50:33.451174 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:50:33.451178 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-10 14:50:33.451182 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:50:33.451185 | orchestrator | 2026-01-10 14:50:33.451189 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2026-01-10 14:50:33.451193 | orchestrator | Saturday 10 January 2026 14:48:29 +0000 (0:00:02.827) 0:02:19.548 ****** 2026-01-10 14:50:33.451197 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-10 14:50:33.451206 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-10 14:50:33.451215 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-10 14:50:33.451220 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-10 14:50:33.451224 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 
'timeout': '30'}}}) 2026-01-10 14:50:33.451227 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-10 14:50:33.451231 | orchestrator | 2026-01-10 14:50:33.451235 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-01-10 14:50:33.451243 | orchestrator | Saturday 10 January 2026 14:48:33 +0000 (0:00:03.644) 0:02:23.193 ****** 2026-01-10 14:50:33.451247 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:50:33.451251 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:50:33.451255 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:50:33.451258 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:50:33.451262 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:50:33.451268 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:50:33.451272 | orchestrator | 2026-01-10 14:50:33.451276 | orchestrator | TASK [neutron : Creating Neutron database] ************************************* 2026-01-10 14:50:33.451280 | orchestrator | Saturday 10 January 2026 14:48:33 +0000 (0:00:00.528) 0:02:23.721 ****** 2026-01-10 14:50:33.451284 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:50:33.451287 | orchestrator | 2026-01-10 14:50:33.451293 | orchestrator | TASK [neutron : Creating 
Neutron database user and setting permissions] ******** 2026-01-10 14:50:33.451297 | orchestrator | Saturday 10 January 2026 14:48:36 +0000 (0:00:02.715) 0:02:26.436 ****** 2026-01-10 14:50:33.451301 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:50:33.451304 | orchestrator | 2026-01-10 14:50:33.451308 | orchestrator | TASK [neutron : Running Neutron bootstrap container] *************************** 2026-01-10 14:50:33.451312 | orchestrator | Saturday 10 January 2026 14:48:38 +0000 (0:00:02.461) 0:02:28.898 ****** 2026-01-10 14:50:33.451316 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:50:33.451319 | orchestrator | 2026-01-10 14:50:33.451323 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-01-10 14:50:33.451327 | orchestrator | Saturday 10 January 2026 14:49:17 +0000 (0:00:38.392) 0:03:07.291 ****** 2026-01-10 14:50:33.451330 | orchestrator | 2026-01-10 14:50:33.451334 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-01-10 14:50:33.451338 | orchestrator | Saturday 10 January 2026 14:49:17 +0000 (0:00:00.141) 0:03:07.432 ****** 2026-01-10 14:50:33.451342 | orchestrator | 2026-01-10 14:50:33.451346 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-01-10 14:50:33.451349 | orchestrator | Saturday 10 January 2026 14:49:17 +0000 (0:00:00.342) 0:03:07.775 ****** 2026-01-10 14:50:33.451353 | orchestrator | 2026-01-10 14:50:33.451357 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-01-10 14:50:33.451361 | orchestrator | Saturday 10 January 2026 14:49:17 +0000 (0:00:00.066) 0:03:07.842 ****** 2026-01-10 14:50:33.451364 | orchestrator | 2026-01-10 14:50:33.451368 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-01-10 14:50:33.451372 | orchestrator | Saturday 10 January 2026 14:49:17 +0000 
(0:00:00.065) 0:03:07.907 ****** 2026-01-10 14:50:33.451376 | orchestrator | 2026-01-10 14:50:33.451380 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-01-10 14:50:33.451383 | orchestrator | Saturday 10 January 2026 14:49:17 +0000 (0:00:00.068) 0:03:07.976 ****** 2026-01-10 14:50:33.451387 | orchestrator | 2026-01-10 14:50:33.451391 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] ******************* 2026-01-10 14:50:33.451394 | orchestrator | Saturday 10 January 2026 14:49:17 +0000 (0:00:00.066) 0:03:08.043 ****** 2026-01-10 14:50:33.451398 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:50:33.451402 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:50:33.451405 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:50:33.451409 | orchestrator | 2026-01-10 14:50:33.451413 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] ******* 2026-01-10 14:50:33.451416 | orchestrator | Saturday 10 January 2026 14:49:40 +0000 (0:00:22.345) 0:03:30.388 ****** 2026-01-10 14:50:33.451421 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:50:33.451424 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:50:33.451428 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:50:33.451432 | orchestrator | 2026-01-10 14:50:33.451435 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-10 14:50:33.451440 | orchestrator | testbed-node-0 : ok=26  changed=15  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-01-10 14:50:33.451448 | orchestrator | testbed-node-1 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2026-01-10 14:50:33.451452 | orchestrator | testbed-node-2 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2026-01-10 14:50:33.451456 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  
rescued=0 ignored=0 2026-01-10 14:50:33.451460 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-01-10 14:50:33.451463 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-01-10 14:50:33.451467 | orchestrator | 2026-01-10 14:50:33.451471 | orchestrator | 2026-01-10 14:50:33.451475 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-10 14:50:33.451478 | orchestrator | Saturday 10 January 2026 14:50:30 +0000 (0:00:50.573) 0:04:20.961 ****** 2026-01-10 14:50:33.451482 | orchestrator | =============================================================================== 2026-01-10 14:50:33.451486 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 50.57s 2026-01-10 14:50:33.451490 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 38.39s 2026-01-10 14:50:33.451493 | orchestrator | neutron : Restart neutron-server container ----------------------------- 22.35s 2026-01-10 14:50:33.451497 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 7.69s 2026-01-10 14:50:33.451501 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 7.26s 2026-01-10 14:50:33.451504 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 7.22s 2026-01-10 14:50:33.451508 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 4.81s 2026-01-10 14:50:33.451511 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 4.74s 2026-01-10 14:50:33.451518 | orchestrator | neutron : Copying over config.json files for services ------------------- 4.34s 2026-01-10 14:50:33.451522 | orchestrator | neutron : Copying over neutron_vpnaas.conf ------------------------------ 3.88s 2026-01-10 
14:50:33.451525 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 3.66s 2026-01-10 14:50:33.451532 | orchestrator | neutron : Check neutron containers -------------------------------------- 3.64s 2026-01-10 14:50:33.451536 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS certificate --- 3.61s 2026-01-10 14:50:33.451539 | orchestrator | neutron : Copying over ssh key ------------------------------------------ 3.59s 2026-01-10 14:50:33.451543 | orchestrator | neutron : Copying over eswitchd.conf ------------------------------------ 3.57s 2026-01-10 14:50:33.451547 | orchestrator | service-ks-register : neutron | Creating projects ----------------------- 3.54s 2026-01-10 14:50:33.451550 | orchestrator | neutron : Copying over nsx.ini ------------------------------------------ 3.52s 2026-01-10 14:50:33.451554 | orchestrator | service-ks-register : neutron | Creating services ----------------------- 3.50s 2026-01-10 14:50:33.451558 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 3.42s 2026-01-10 14:50:33.451561 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 3.40s 2026-01-10 14:50:33.451565 | orchestrator | 2026-01-10 14:50:33 | INFO  | Task 08260106-6ef9-4439-a5b4-b55560d7a436 is in state STARTED 2026-01-10 14:50:33.451569 | orchestrator | 2026-01-10 14:50:33 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:50:36.486315 | orchestrator | 2026-01-10 14:50:36 | INFO  | Task c00b516f-16a9-42ea-a1ca-3fe42da79d38 is in state STARTED 2026-01-10 14:50:36.486415 | orchestrator | 2026-01-10 14:50:36 | INFO  | Task a94b7b03-c0a5-4d57-aa04-80b708ece6d9 is in state STARTED 2026-01-10 14:50:36.486882 | orchestrator | 2026-01-10 14:50:36 | INFO  | Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED 2026-01-10 14:50:36.487463 | orchestrator | 2026-01-10 14:50:36 | INFO  | Task 
08260106-6ef9-4439-a5b4-b55560d7a436 is in state STARTED 2026-01-10 14:50:36.487487 | orchestrator | 2026-01-10 14:50:36 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:50:39.513756 | orchestrator | 2026-01-10 14:50:39 | INFO  | Task c00b516f-16a9-42ea-a1ca-3fe42da79d38 is in state STARTED 2026-01-10 14:50:39.514045 | orchestrator | 2026-01-10 14:50:39 | INFO  | Task a94b7b03-c0a5-4d57-aa04-80b708ece6d9 is in state STARTED 2026-01-10 14:50:39.514670 | orchestrator | 2026-01-10 14:50:39 | INFO  | Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED 2026-01-10 14:50:39.515343 | orchestrator | 2026-01-10 14:50:39 | INFO  | Task 08260106-6ef9-4439-a5b4-b55560d7a436 is in state STARTED 2026-01-10 14:50:39.515367 | orchestrator | 2026-01-10 14:50:39 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:50:42.535927 | orchestrator | 2026-01-10 14:50:42 | INFO  | Task c00b516f-16a9-42ea-a1ca-3fe42da79d38 is in state STARTED 2026-01-10 14:50:42.536010 | orchestrator | 2026-01-10 14:50:42 | INFO  | Task a94b7b03-c0a5-4d57-aa04-80b708ece6d9 is in state STARTED 2026-01-10 14:50:42.536516 | orchestrator | 2026-01-10 14:50:42 | INFO  | Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED 2026-01-10 14:50:42.537954 | orchestrator | 2026-01-10 14:50:42 | INFO  | Task 08260106-6ef9-4439-a5b4-b55560d7a436 is in state STARTED 2026-01-10 14:50:42.537991 | orchestrator | 2026-01-10 14:50:42 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:50:45.573313 | orchestrator | 2026-01-10 14:50:45 | INFO  | Task c00b516f-16a9-42ea-a1ca-3fe42da79d38 is in state STARTED 2026-01-10 14:50:45.573934 | orchestrator | 2026-01-10 14:50:45 | INFO  | Task a94b7b03-c0a5-4d57-aa04-80b708ece6d9 is in state STARTED 2026-01-10 14:50:45.576587 | orchestrator | 2026-01-10 14:50:45 | INFO  | Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED 2026-01-10 14:50:45.578758 | orchestrator | 2026-01-10 14:50:45 | INFO  | Task 
08260106-6ef9-4439-a5b4-b55560d7a436 is in state STARTED 2026-01-10 14:50:45.578816 | orchestrator | 2026-01-10 14:50:45 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:50:48.608665 | orchestrator | 2026-01-10 14:50:48 | INFO  | Task c00b516f-16a9-42ea-a1ca-3fe42da79d38 is in state STARTED 2026-01-10 14:50:48.609284 | orchestrator | 2026-01-10 14:50:48 | INFO  | Task a94b7b03-c0a5-4d57-aa04-80b708ece6d9 is in state STARTED 2026-01-10 14:50:48.610124 | orchestrator | 2026-01-10 14:50:48 | INFO  | Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED 2026-01-10 14:50:48.611342 | orchestrator | 2026-01-10 14:50:48 | INFO  | Task 08260106-6ef9-4439-a5b4-b55560d7a436 is in state STARTED 2026-01-10 14:50:48.611380 | orchestrator | 2026-01-10 14:50:48 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:50:51.640564 | orchestrator | 2026-01-10 14:50:51 | INFO  | Task c00b516f-16a9-42ea-a1ca-3fe42da79d38 is in state STARTED 2026-01-10 14:50:51.640751 | orchestrator | 2026-01-10 14:50:51 | INFO  | Task a94b7b03-c0a5-4d57-aa04-80b708ece6d9 is in state STARTED 2026-01-10 14:50:51.641505 | orchestrator | 2026-01-10 14:50:51 | INFO  | Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED 2026-01-10 14:50:51.642126 | orchestrator | 2026-01-10 14:50:51 | INFO  | Task 08260106-6ef9-4439-a5b4-b55560d7a436 is in state STARTED 2026-01-10 14:50:51.642321 | orchestrator | 2026-01-10 14:50:51 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:50:54.668215 | orchestrator | 2026-01-10 14:50:54 | INFO  | Task c00b516f-16a9-42ea-a1ca-3fe42da79d38 is in state STARTED 2026-01-10 14:50:54.668526 | orchestrator | 2026-01-10 14:50:54 | INFO  | Task a94b7b03-c0a5-4d57-aa04-80b708ece6d9 is in state STARTED 2026-01-10 14:50:54.669145 | orchestrator | 2026-01-10 14:50:54 | INFO  | Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED 2026-01-10 14:50:54.670568 | orchestrator | 2026-01-10 14:50:54 | INFO  | Task 
08260106-6ef9-4439-a5b4-b55560d7a436 is in state STARTED 2026-01-10 14:50:54.670588 | orchestrator | 2026-01-10 14:50:54 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:50:57.711917 | orchestrator | 2026-01-10 14:50:57 | INFO  | Task c00b516f-16a9-42ea-a1ca-3fe42da79d38 is in state STARTED 2026-01-10 14:50:57.712568 | orchestrator | 2026-01-10 14:50:57 | INFO  | Task a94b7b03-c0a5-4d57-aa04-80b708ece6d9 is in state STARTED 2026-01-10 14:50:57.713054 | orchestrator | 2026-01-10 14:50:57 | INFO  | Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED 2026-01-10 14:50:57.714107 | orchestrator | 2026-01-10 14:50:57 | INFO  | Task 08260106-6ef9-4439-a5b4-b55560d7a436 is in state STARTED 2026-01-10 14:50:57.714132 | orchestrator | 2026-01-10 14:50:57 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:51:00.753552 | orchestrator | 2026-01-10 14:51:00 | INFO  | Task c00b516f-16a9-42ea-a1ca-3fe42da79d38 is in state STARTED 2026-01-10 14:51:00.753638 | orchestrator | 2026-01-10 14:51:00 | INFO  | Task a94b7b03-c0a5-4d57-aa04-80b708ece6d9 is in state STARTED 2026-01-10 14:51:00.755306 | orchestrator | 2026-01-10 14:51:00 | INFO  | Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED 2026-01-10 14:51:00.756162 | orchestrator | 2026-01-10 14:51:00 | INFO  | Task 08260106-6ef9-4439-a5b4-b55560d7a436 is in state STARTED 2026-01-10 14:51:00.756214 | orchestrator | 2026-01-10 14:51:00 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:51:03.785650 | orchestrator | 2026-01-10 14:51:03 | INFO  | Task c00b516f-16a9-42ea-a1ca-3fe42da79d38 is in state STARTED 2026-01-10 14:51:03.785731 | orchestrator | 2026-01-10 14:51:03 | INFO  | Task a94b7b03-c0a5-4d57-aa04-80b708ece6d9 is in state STARTED 2026-01-10 14:51:03.786000 | orchestrator | 2026-01-10 14:51:03 | INFO  | Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED 2026-01-10 14:51:03.787086 | orchestrator | 2026-01-10 14:51:03 | INFO  | Task 
08260106-6ef9-4439-a5b4-b55560d7a436 is in state STARTED 2026-01-10 14:51:03.787115 | orchestrator | 2026-01-10 14:51:03 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:51:06.820886 | orchestrator | 2026-01-10 14:51:06 | INFO  | Task c00b516f-16a9-42ea-a1ca-3fe42da79d38 is in state STARTED 2026-01-10 14:51:06.821290 | orchestrator | 2026-01-10 14:51:06 | INFO  | Task a94b7b03-c0a5-4d57-aa04-80b708ece6d9 is in state STARTED 2026-01-10 14:51:06.823077 | orchestrator | 2026-01-10 14:51:06 | INFO  | Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED 2026-01-10 14:51:06.823769 | orchestrator | 2026-01-10 14:51:06 | INFO  | Task 08260106-6ef9-4439-a5b4-b55560d7a436 is in state STARTED 2026-01-10 14:51:06.823879 | orchestrator | 2026-01-10 14:51:06 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:51:09.867463 | orchestrator | 2026-01-10 14:51:09 | INFO  | Task c00b516f-16a9-42ea-a1ca-3fe42da79d38 is in state STARTED 2026-01-10 14:51:09.868162 | orchestrator | 2026-01-10 14:51:09 | INFO  | Task a94b7b03-c0a5-4d57-aa04-80b708ece6d9 is in state STARTED 2026-01-10 14:51:09.869042 | orchestrator | 2026-01-10 14:51:09 | INFO  | Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED 2026-01-10 14:51:09.869761 | orchestrator | 2026-01-10 14:51:09 | INFO  | Task 08260106-6ef9-4439-a5b4-b55560d7a436 is in state STARTED 2026-01-10 14:51:09.869838 | orchestrator | 2026-01-10 14:51:09 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:51:12.901754 | orchestrator | 2026-01-10 14:51:12 | INFO  | Task c00b516f-16a9-42ea-a1ca-3fe42da79d38 is in state STARTED 2026-01-10 14:51:12.904629 | orchestrator | 2026-01-10 14:51:12 | INFO  | Task a94b7b03-c0a5-4d57-aa04-80b708ece6d9 is in state STARTED 2026-01-10 14:51:12.904691 | orchestrator | 2026-01-10 14:51:12 | INFO  | Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED 2026-01-10 14:51:12.904712 | orchestrator | 2026-01-10 14:51:12 | INFO  | Task 
08260106-6ef9-4439-a5b4-b55560d7a436 is in state STARTED 2026-01-10 14:51:12.904719 | orchestrator | 2026-01-10 14:51:12 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:51:15.928617 | orchestrator | 2026-01-10 14:51:15 | INFO  | Task c00b516f-16a9-42ea-a1ca-3fe42da79d38 is in state STARTED 2026-01-10 14:51:15.928723 | orchestrator | 2026-01-10 14:51:15 | INFO  | Task a94b7b03-c0a5-4d57-aa04-80b708ece6d9 is in state STARTED 2026-01-10 14:51:15.929569 | orchestrator | 2026-01-10 14:51:15 | INFO  | Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED 2026-01-10 14:51:15.930522 | orchestrator | 2026-01-10 14:51:15 | INFO  | Task 08260106-6ef9-4439-a5b4-b55560d7a436 is in state STARTED 2026-01-10 14:51:15.930627 | orchestrator | 2026-01-10 14:51:15 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:51:18.957696 | orchestrator | 2026-01-10 14:51:18 | INFO  | Task c00b516f-16a9-42ea-a1ca-3fe42da79d38 is in state STARTED 2026-01-10 14:51:18.957765 | orchestrator | 2026-01-10 14:51:18 | INFO  | Task a94b7b03-c0a5-4d57-aa04-80b708ece6d9 is in state STARTED 2026-01-10 14:51:18.958357 | orchestrator | 2026-01-10 14:51:18 | INFO  | Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED 2026-01-10 14:51:18.958997 | orchestrator | 2026-01-10 14:51:18 | INFO  | Task 08260106-6ef9-4439-a5b4-b55560d7a436 is in state STARTED 2026-01-10 14:51:18.959012 | orchestrator | 2026-01-10 14:51:18 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:51:21.988392 | orchestrator | 2026-01-10 14:51:21 | INFO  | Task c00b516f-16a9-42ea-a1ca-3fe42da79d38 is in state STARTED 2026-01-10 14:51:21.988876 | orchestrator | 2026-01-10 14:51:21 | INFO  | Task a94b7b03-c0a5-4d57-aa04-80b708ece6d9 is in state STARTED 2026-01-10 14:51:21.991065 | orchestrator | 2026-01-10 14:51:21 | INFO  | Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED 2026-01-10 14:51:21.991755 | orchestrator | 2026-01-10 14:51:21 | INFO  | Task 
08260106-6ef9-4439-a5b4-b55560d7a436 is in state STARTED 2026-01-10 14:51:21.991787 | orchestrator | 2026-01-10 14:51:21 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:51:25.044348 | orchestrator | 2026-01-10 14:51:25 | INFO  | Task c00b516f-16a9-42ea-a1ca-3fe42da79d38 is in state STARTED 2026-01-10 14:51:25.045104 | orchestrator | 2026-01-10 14:51:25 | INFO  | Task a94b7b03-c0a5-4d57-aa04-80b708ece6d9 is in state STARTED 2026-01-10 14:51:25.045394 | orchestrator | 2026-01-10 14:51:25 | INFO  | Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED 2026-01-10 14:51:25.046082 | orchestrator | 2026-01-10 14:51:25 | INFO  | Task 08260106-6ef9-4439-a5b4-b55560d7a436 is in state STARTED 2026-01-10 14:51:25.046134 | orchestrator | 2026-01-10 14:51:25 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:51:28.071097 | orchestrator | 2026-01-10 14:51:28 | INFO  | Task c00b516f-16a9-42ea-a1ca-3fe42da79d38 is in state STARTED 2026-01-10 14:51:28.071393 | orchestrator | 2026-01-10 14:51:28 | INFO  | Task a94b7b03-c0a5-4d57-aa04-80b708ece6d9 is in state STARTED 2026-01-10 14:51:28.072374 | orchestrator | 2026-01-10 14:51:28 | INFO  | Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED 2026-01-10 14:51:28.072709 | orchestrator | 2026-01-10 14:51:28 | INFO  | Task 08260106-6ef9-4439-a5b4-b55560d7a436 is in state STARTED 2026-01-10 14:51:28.072943 | orchestrator | 2026-01-10 14:51:28 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:51:31.096501 | orchestrator | 2026-01-10 14:51:31 | INFO  | Task c00b516f-16a9-42ea-a1ca-3fe42da79d38 is in state STARTED 2026-01-10 14:51:31.096640 | orchestrator | 2026-01-10 14:51:31 | INFO  | Task a94b7b03-c0a5-4d57-aa04-80b708ece6d9 is in state STARTED 2026-01-10 14:51:31.097324 | orchestrator | 2026-01-10 14:51:31 | INFO  | Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED 2026-01-10 14:51:31.098558 | orchestrator | 2026-01-10 14:51:31 | INFO  | Task 
08260106-6ef9-4439-a5b4-b55560d7a436 is in state STARTED 2026-01-10 14:51:31.098609 | orchestrator | 2026-01-10 14:51:31 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:51:34.158702 | orchestrator | 2026-01-10 14:51:34 | INFO  | Task c00b516f-16a9-42ea-a1ca-3fe42da79d38 is in state STARTED 2026-01-10 14:51:34.158899 | orchestrator | 2026-01-10 14:51:34 | INFO  | Task a94b7b03-c0a5-4d57-aa04-80b708ece6d9 is in state STARTED 2026-01-10 14:51:34.159703 | orchestrator | 2026-01-10 14:51:34 | INFO  | Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED 2026-01-10 14:51:34.160524 | orchestrator | 2026-01-10 14:51:34 | INFO  | Task 08260106-6ef9-4439-a5b4-b55560d7a436 is in state STARTED 2026-01-10 14:51:34.160552 | orchestrator | 2026-01-10 14:51:34 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:51:37.207325 | orchestrator | 2026-01-10 14:51:37 | INFO  | Task c00b516f-16a9-42ea-a1ca-3fe42da79d38 is in state STARTED 2026-01-10 14:51:37.207620 | orchestrator | 2026-01-10 14:51:37 | INFO  | Task a94b7b03-c0a5-4d57-aa04-80b708ece6d9 is in state STARTED 2026-01-10 14:51:37.208236 | orchestrator | 2026-01-10 14:51:37 | INFO  | Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED 2026-01-10 14:51:37.208782 | orchestrator | 2026-01-10 14:51:37 | INFO  | Task 08260106-6ef9-4439-a5b4-b55560d7a436 is in state STARTED 2026-01-10 14:51:37.208825 | orchestrator | 2026-01-10 14:51:37 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:51:40.233850 | orchestrator | 2026-01-10 14:51:40 | INFO  | Task c00b516f-16a9-42ea-a1ca-3fe42da79d38 is in state STARTED 2026-01-10 14:51:40.234524 | orchestrator | 2026-01-10 14:51:40 | INFO  | Task a94b7b03-c0a5-4d57-aa04-80b708ece6d9 is in state STARTED 2026-01-10 14:51:40.235348 | orchestrator | 2026-01-10 14:51:40 | INFO  | Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED 2026-01-10 14:51:40.236045 | orchestrator | 2026-01-10 14:51:40 | INFO  | Task 
08260106-6ef9-4439-a5b4-b55560d7a436 is in state STARTED 2026-01-10 14:51:40.236688 | orchestrator | 2026-01-10 14:51:40 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:51:43.273116 | orchestrator | 2026-01-10 14:51:43 | INFO  | Task c00b516f-16a9-42ea-a1ca-3fe42da79d38 is in state STARTED 2026-01-10 14:51:43.274378 | orchestrator | 2026-01-10 14:51:43 | INFO  | Task a94b7b03-c0a5-4d57-aa04-80b708ece6d9 is in state STARTED 2026-01-10 14:51:43.275357 | orchestrator | 2026-01-10 14:51:43 | INFO  | Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED 2026-01-10 14:51:43.276415 | orchestrator | 2026-01-10 14:51:43 | INFO  | Task 08260106-6ef9-4439-a5b4-b55560d7a436 is in state STARTED 2026-01-10 14:51:43.276436 | orchestrator | 2026-01-10 14:51:43 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:51:46.325739 | orchestrator | 2026-01-10 14:51:46 | INFO  | Task c00b516f-16a9-42ea-a1ca-3fe42da79d38 is in state STARTED 2026-01-10 14:51:46.326666 | orchestrator | 2026-01-10 14:51:46 | INFO  | Task a94b7b03-c0a5-4d57-aa04-80b708ece6d9 is in state STARTED 2026-01-10 14:51:46.327877 | orchestrator | 2026-01-10 14:51:46 | INFO  | Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED 2026-01-10 14:51:46.329525 | orchestrator | 2026-01-10 14:51:46 | INFO  | Task 08260106-6ef9-4439-a5b4-b55560d7a436 is in state STARTED 2026-01-10 14:51:46.329576 | orchestrator | 2026-01-10 14:51:46 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:51:49.362497 | orchestrator | 2026-01-10 14:51:49 | INFO  | Task c00b516f-16a9-42ea-a1ca-3fe42da79d38 is in state STARTED 2026-01-10 14:51:49.365078 | orchestrator | 2026-01-10 14:51:49 | INFO  | Task a94b7b03-c0a5-4d57-aa04-80b708ece6d9 is in state STARTED 2026-01-10 14:51:49.365159 | orchestrator | 2026-01-10 14:51:49 | INFO  | Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED 2026-01-10 14:51:49.366272 | orchestrator | 2026-01-10 14:51:49 | INFO  | Task 
08260106-6ef9-4439-a5b4-b55560d7a436 is in state STARTED 2026-01-10 14:52:25.987475 | orchestrator | 2026-01-10 14:52:25 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:52:29.070248 | orchestrator | 2026-01-10 14:52:29 | INFO  | Task c00b516f-16a9-42ea-a1ca-3fe42da79d38 is in state STARTED 2026-01-10 14:52:29.071505 | orchestrator | 2026-01-10 14:52:29 | INFO  | Task a94b7b03-c0a5-4d57-aa04-80b708ece6d9 is in state STARTED 2026-01-10 14:52:29.072088 | orchestrator | 2026-01-10 14:52:29 | INFO  | Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED 2026-01-10 14:52:29.072798 | orchestrator | 2026-01-10 14:52:29 | INFO  | Task 08260106-6ef9-4439-a5b4-b55560d7a436 is in state STARTED 2026-01-10 14:52:29.072827 | orchestrator | 2026-01-10 14:52:29 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:52:32.108689 | orchestrator | 2026-01-10 14:52:32 | INFO  | Task c00b516f-16a9-42ea-a1ca-3fe42da79d38 is in state STARTED 2026-01-10 14:52:32.110866 | orchestrator | 2026-01-10 14:52:32.110906 | orchestrator | 2026-01-10 14:52:32 | INFO  | Task a94b7b03-c0a5-4d57-aa04-80b708ece6d9 is in state SUCCESS 2026-01-10 14:52:32.111778 | orchestrator | 2026-01-10 14:52:32.111810 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-10 14:52:32.111816 | orchestrator | 2026-01-10 14:52:32.111820 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-10 14:52:32.111824 | orchestrator | Saturday 10 January 2026 14:49:16 +0000 (0:00:00.301) 0:00:00.301 ****** 2026-01-10 14:52:32.111828 | orchestrator | ok: [testbed-manager] 2026-01-10 14:52:32.111833 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:52:32.111837 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:52:32.111841 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:52:32.111844 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:52:32.111848 | orchestrator | ok: [testbed-node-4] 2026-01-10 
14:52:32.111852 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:52:32.111856 | orchestrator | 2026-01-10 14:52:32.111859 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-10 14:52:32.111863 | orchestrator | Saturday 10 January 2026 14:49:17 +0000 (0:00:00.968) 0:00:01.269 ****** 2026-01-10 14:52:32.111868 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2026-01-10 14:52:32.111897 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2026-01-10 14:52:32.111902 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2026-01-10 14:52:32.111906 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2026-01-10 14:52:32.111910 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2026-01-10 14:52:32.111914 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2026-01-10 14:52:32.111918 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2026-01-10 14:52:32.111921 | orchestrator | 2026-01-10 14:52:32.111925 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2026-01-10 14:52:32.111929 | orchestrator | 2026-01-10 14:52:32.111933 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-01-10 14:52:32.111936 | orchestrator | Saturday 10 January 2026 14:49:17 +0000 (0:00:00.818) 0:00:02.088 ****** 2026-01-10 14:52:32.111941 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-10 14:52:32.111945 | orchestrator | 2026-01-10 14:52:32.111955 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2026-01-10 14:52:32.111972 | orchestrator | Saturday 10 January 2026 14:49:21 +0000 (0:00:03.243) 0:00:05.332 ****** 2026-01-10 
14:52:32.111978 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-10 14:52:32.111984 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-10 14:52:32.111994 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-10 14:52:32.111999 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': 
['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-01-10 14:52:32.112019 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-10 14:52:32.112027 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-10 14:52:32.112031 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-10 14:52:32.112037 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:52:32.112041 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:52:32.112045 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-10 14:52:32.112049 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 
'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:52:32.112056 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-10 14:52:32.112064 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-10 14:52:32.112068 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:52:32.112074 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-10 14:52:32.112078 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:52:32.112083 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-01-10 14:52:32.112087 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-10 14:52:32.112109 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:52:32.112116 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-10 14:52:32.112131 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-10 14:52:32.112159 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-10 14:52:32.112164 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:52:32.112168 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-10 14:52:32.112172 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-10 14:52:32.112176 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-10 14:52:32.112186 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:52:32.112190 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:52:32.112194 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:52:32.112198 | orchestrator | 2026-01-10 14:52:32.112202 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-01-10 14:52:32.112208 | orchestrator | Saturday 10 January 2026 14:49:24 +0000 (0:00:03.755) 0:00:09.087 ****** 2026-01-10 14:52:32.112214 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-10 14:52:32.112220 | orchestrator | 2026-01-10 14:52:32.112226 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2026-01-10 14:52:32.112232 | orchestrator | Saturday 10 January 2026 14:49:26 +0000 
(0:00:01.187) 0:00:10.275 ****** 2026-01-10 14:52:32.112238 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-01-10 14:52:32.112245 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-10 14:52:32.112256 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-10 
14:52:32.112266 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-10 14:52:32.112273 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-10 14:52:32.112279 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-10 14:52:32.112285 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 
'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-10 14:52:32.112289 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:52:32.112293 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-10 14:52:32.112297 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:52:32.112304 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 
'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-10 14:52:32.112310 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-10 14:52:32.112314 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:52:32.112318 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-10 14:52:32.112322 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-10 14:52:32.112326 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:52:32.112330 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:52:32.112337 | orchestrator | changed: 
[testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-10 14:52:32.112341 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-10 14:52:32.112366 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-01-10 14:52:32.112371 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:52:32.112377 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-10 14:52:32.112381 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-10 14:52:32.112385 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-10 14:52:32.112392 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:52:32.112675 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-10 14:52:32.112684 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:52:32.112688 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:52:32.112695 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:52:32.112700 | orchestrator | 2026-01-10 14:52:32.112704 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2026-01-10 14:52:32.112709 | orchestrator | Saturday 10 January 2026 14:49:31 +0000 (0:00:05.301) 0:00:15.576 ****** 2026-01-10 14:52:32.112713 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-01-10 14:52:32.112721 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-10 14:52:32.112755 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-10 14:52:32.112774 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-01-10 14:52:32.112779 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:52:32.112786 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-10 14:52:32.112790 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:52:32.112798 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:52:32.112803 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-10 14:52:32.112807 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': 
{}}})  2026-01-10 14:52:32.112813 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:52:32.112822 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:52:32.112826 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:52:32.112832 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-10 14:52:32.112836 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-10 14:52:32.112850 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:52:32.112854 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:52:32.112858 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:52:32.112862 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:52:32.112869 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-10 14:52:32.112873 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:52:32.112877 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:52:32.112881 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:52:32.112885 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:52:32.112903 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-10 14:52:32.112918 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-10 14:52:32.112922 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-10 14:52:32.112926 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:52:32.112930 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-10 14:52:32.112934 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-10 14:52:32.112941 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-10 14:52:32.112945 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:52:32.112949 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-10 14:52:32.112953 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-10 14:52:32.112966 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-10 14:52:32.112970 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:52:32.112974 | orchestrator | 2026-01-10 14:52:32.112982 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2026-01-10 14:52:32.112986 | orchestrator | Saturday 10 January 2026 14:49:33 +0000 (0:00:01.701) 0:00:17.277 ****** 2026-01-10 14:52:32.112997 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-01-10 14:52:32.113001 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-10 14:52:32.113005 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-10 14:52:32.113011 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-10 14:52:32.113015 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:52:32.113021 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-01-10 14:52:32.113028 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 
'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:52:32.113032 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:52:32.113036 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-10 14:52:32.113044 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:52:32.113050 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:52:32.113059 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:52:32.113063 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-10 14:52:32.113067 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:52:32.113078 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:52:32.113082 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 
'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-10 14:52:32.113089 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:52:32.113093 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:52:32.113097 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-10 14:52:32.113101 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 
'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:52:32.113132 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:52:32.113136 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-10 14:52:32.113142 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:52:32.113146 | orchestrator | skipping: 
[testbed-node-2] 2026-01-10 14:52:32.113152 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-10 14:52:32.113156 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-10 14:52:32.113160 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-10 14:52:32.113164 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:52:32.113168 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 
'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-10 14:52:32.113172 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-10 14:52:32.113178 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-10 14:52:32.113185 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:52:32.113189 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-10 14:52:32.113194 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-10 14:52:32.113198 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-10 14:52:32.113206 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:52:32.113210 | orchestrator | 2026-01-10 14:52:32.113214 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2026-01-10 14:52:32.113223 | orchestrator | Saturday 10 January 2026 14:49:35 +0000 (0:00:02.460) 0:00:19.737 ****** 2026-01-10 14:52:32.113227 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-10 14:52:32.113231 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-01-10 14:52:32.113237 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-10 14:52:32.113244 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-10 14:52:32.113248 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-10 14:52:32.113257 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-10 14:52:32.113261 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:52:32.113265 | 
orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-10 14:52:32.113279 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-10 14:52:32.113288 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-10 14:52:32.113294 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:52:32.113301 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:52:32.113305 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-10 14:52:32.113311 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:52:32.113322 | 
orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-10 14:52:32.113326 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-10 14:52:32.113330 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-10 14:52:32.113334 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:52:32.113343 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:52:32.113348 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-01-10 14:52:32.113354 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': 
{'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-10 14:52:32.113358 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-10 14:52:32.113362 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-10 14:52:32.113366 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-10 14:52:32.113374 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-10 14:52:32.113378 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:52:32.113382 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:52:32.113386 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:52:32.113393 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:52:32.113397 | orchestrator | 2026-01-10 14:52:32.113401 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2026-01-10 14:52:32.113405 | orchestrator | Saturday 10 January 2026 14:49:42 +0000 (0:00:06.732) 0:00:26.469 ****** 2026-01-10 14:52:32.113409 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-10 14:52:32.113413 | orchestrator | 2026-01-10 14:52:32.113417 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2026-01-10 14:52:32.113420 | orchestrator | Saturday 10 January 2026 14:49:43 +0000 (0:00:01.662) 0:00:28.132 ****** 2026-01-10 14:52:32.113424 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 
996, 'inode': 1314430, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.4112272, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:52:32.113429 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1314430, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.4112272, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:52:32.113438 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1314430, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.4112272, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:52:32.113442 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1314430, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.4112272, 'gr_name': 
'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-10 14:52:32.113446 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1314430, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.4112272, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:52:32.113453 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1314464, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.4144194, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:52:32.113457 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1314464, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.4144194, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 
'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:52:32.113461 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1314430, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.4112272, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:52:32.113467 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1314464, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.4144194, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:52:32.113473 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1314430, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.4112272, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:52:32.113477 | orchestrator | skipping: [testbed-node-2] => (item={'path': 
'/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1314411, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.410435, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:52:32.113481 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1314464, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.4144194, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:52:32.113487 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1314411, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.410435, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:52:32.113491 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': 
False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1314464, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.4144194, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:52:32.113495 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1314411, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.410435, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:52:32.113501 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1314464, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.4144194, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-10 14:52:32.113511 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1314445, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.412805, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:52:32.113515 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1314464, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.4144194, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:52:32.113519 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1314411, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.410435, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:52:32.113525 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1314445, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.412805, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 
'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:52:32.113529 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1314406, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.4075537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:52:32.113533 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1314411, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.410435, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:52:32.113539 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1314445, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.412805, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:52:32.113764 | orchestrator | skipping: [testbed-node-3] => (item={'path': 
'/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1314445, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.412805, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:52:32.113774 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1314411, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.410435, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:52:32.113778 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1314406, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.4075537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:52:32.113785 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': 
False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1314433, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.4112272, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:52:32.113789 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1314411, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.410435, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-10 14:52:32.113797 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1314406, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.4075537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:52:32.113801 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1314445, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.412805, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:52:32.113807 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1314445, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.412805, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:52:32.113812 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1314433, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.4112272, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:52:32.113815 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1314406, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.4075537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 
'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:52:32.113821 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1314433, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.4112272, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:52:32.113826 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1314433, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.4112272, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:52:32.113832 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1314406, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.4075537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:52:32.113836 | orchestrator | skipping: [testbed-node-3] => (item={'path': 
'/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1314406, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.4075537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:52:32.113840 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1314443, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.4125519, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:52:32.113846 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1314443, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.4125519, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:52:32.113850 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 
'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1314443, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.4125519, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:52:32.113855 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1314434, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.4116168, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:52:32.113859 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1314433, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.4112272, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:52:32.113866 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1314443, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.4125519, 'gr_name': 
'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:52:32.113870 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1314433, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.4112272, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:52:32.113874 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1314427, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.4108353, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:52:32.113880 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1314434, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.4116168, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': 
False, 'isuid': False, 'isgid': False})  2026-01-10 14:52:32.113884 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1314443, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.4125519, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:52:32.113889 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1314434, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.4116168, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:52:32.113893 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1314434, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.4116168, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:52:32.113900 | orchestrator | skipping: [testbed-node-3] => (item={'path': 
'/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1314434, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.4116168, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:52:32.113904 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1314443, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.4125519, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:52:32.113908 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1314427, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.4108353, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:52:32.113914 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1314427, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.4108353, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:52:32.113918 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1314462, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.4144194, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:52:32.113924 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1314427, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.4108353, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:52:32.113931 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1314445, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 
1768053707.412805, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-10 14:52:32.113935 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1314462, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.4144194, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:52:32.113939 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1314397, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.4064586, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:52:32.113943 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1314462, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.4144194, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:52:32.113949 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1314427, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.4108353, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:52:32.113953 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1314485, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.4161053, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:52:32.113959 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1314459, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.414151, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:52:32.113967 | orchestrator | skipping: [testbed-node-1] 
=> (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1314462, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.4144194, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:52:32.113971 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1314434, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.4116168, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:52:32.113975 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1314462, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.4144194, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:52:32.113978 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1314397, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.4064586, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:52:32.113984 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1314397, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.4064586, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:52:32.113989 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1314397, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.4064586, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:52:32.113994 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1314409, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 
'mtime': 1764530892.0, 'ctime': 1768053707.4082925, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:52:32.114001 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1314485, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.4161053, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:52:32.114005 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1314485, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.4161053, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:52:32.114008 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1314427, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.4108353, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': 
True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:52:32.114045 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1314404, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.4075537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:52:32.114058 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1314459, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.414151, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:52:32.114068 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1314397, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.4064586, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:52:32.114084 | orchestrator | 
skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1314409, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.4082925, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:52:32.114091 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1314459, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.414151, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:52:32.114097 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1314440, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.4122062, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:52:32.114103 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1314485, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.4161053, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:52:32.114109 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1314409, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.4082925, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:52:32.114118 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1314462, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.4144194, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:52:32.114124 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1314459, 'dev': 125, 'nlink': 1, 'atime': 
1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.414151, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:52:32.114133 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1314485, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.4161053, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:52:32.114141 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1314404, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.4075537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:52:32.114175 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1314404, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.4075537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': 
False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:52:32.114182 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1314440, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.4122062, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:52:32.114188 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1314459, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.414151, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:52:32.114198 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1314409, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.4082925, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:52:32.114205 | orchestrator 
| skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1314436, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.411893, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:52:32.114216 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1314406, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.4075537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-10 14:52:32.114225 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1314440, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.4122062, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:52:32.114232 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1314397, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.4064586, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:52:32.114238 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1314436, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.411893, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:52:32.114244 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1314485, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.4161053, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:52:32.114253 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1314480, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 
'mtime': 1764530892.0, 'ctime': 1768053707.4161053, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:52:32.114263 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1314404, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.4075537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:52:32.114270 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:52:32.114276 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1314409, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.4082925, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:52:32.114285 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1314436, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.411893, 'gr_name': 'root', 'pw_name': 'root', 
'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:52:32.114289 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1314480, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.4161053, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:52:32.114293 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:52:32.114297 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1314459, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.414151, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:52:32.114300 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1314440, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.4122062, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': 
False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:52:32.114307 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1314480, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.4161053, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:52:32.114313 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:52:32.114317 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1314433, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.4112272, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-10 14:52:32.114323 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1314404, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.4075537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  
2026-01-10 14:52:32.114327 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1314409, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.4082925, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:52:32.114331 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1314436, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.411893, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:52:32.114335 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1314404, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.4075537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:52:32.114339 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1314480, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.4161053, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:52:32.114342 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:52:32.114353 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1314440, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.4122062, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:52:32.114357 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1314436, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.411893, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:52:32.114363 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': 
False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1314440, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.4122062, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:52:32.114368 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1314443, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.4125519, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-10 14:52:32.114372 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1314480, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.4161053, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:52:32.114377 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1314436, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.411893, 'gr_name': 
'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:52:32.114381 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:52:32.114385 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1314480, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.4161053, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:52:32.114391 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:52:32.114398 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1314434, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.4116168, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-10 14:52:32.114403 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1314427, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.4108353, 'gr_name': 'root', 
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-10 14:52:32.114409 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1314462, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.4144194, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-10 14:52:32.114414 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1314397, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.4064586, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-10 14:52:32.114418 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1314485, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.4161053, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 
'isuid': False, 'isgid': False}) 2026-01-10 14:52:32.114423 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1314459, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.414151, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-10 14:52:32.114427 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1314409, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.4082925, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-10 14:52:32.114436 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1314404, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.4075537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-10 14:52:32.114440 | orchestrator | changed: [testbed-manager] => (item={'path': 
'/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1314440, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.4122062, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-10 14:52:32.114447 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1314436, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.411893, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-10 14:52:32.114451 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1314480, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.4161053, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-10 14:52:32.114455 | orchestrator | 2026-01-10 14:52:32.114460 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2026-01-10 14:52:32.114465 | orchestrator | Saturday 10 January 2026 14:50:15 +0000 
(0:00:31.283) 0:00:59.415 ****** 2026-01-10 14:52:32.114469 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-10 14:52:32.114473 | orchestrator | 2026-01-10 14:52:32.114478 | orchestrator | TASK [prometheus : Find prometheus host config overrides] ********************** 2026-01-10 14:52:32.114482 | orchestrator | Saturday 10 January 2026 14:50:15 +0000 (0:00:00.677) 0:01:00.093 ****** 2026-01-10 14:52:32.114486 | orchestrator | [WARNING]: Skipped 2026-01-10 14:52:32.114491 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-10 14:52:32.114495 | orchestrator | manager/prometheus.yml.d' path due to this access issue: 2026-01-10 14:52:32.114499 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-10 14:52:32.114504 | orchestrator | manager/prometheus.yml.d' is not a directory 2026-01-10 14:52:32.114508 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-10 14:52:32.114512 | orchestrator | [WARNING]: Skipped 2026-01-10 14:52:32.114516 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-10 14:52:32.114523 | orchestrator | node-0/prometheus.yml.d' path due to this access issue: 2026-01-10 14:52:32.114527 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-10 14:52:32.114532 | orchestrator | node-0/prometheus.yml.d' is not a directory 2026-01-10 14:52:32.114536 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-10 14:52:32.114540 | orchestrator | [WARNING]: Skipped 2026-01-10 14:52:32.114544 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-10 14:52:32.114548 | orchestrator | node-1/prometheus.yml.d' path due to this access issue: 2026-01-10 14:52:32.114553 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-10 14:52:32.114557 | orchestrator | node-1/prometheus.yml.d' 
is not a directory 2026-01-10 14:52:32.114561 | orchestrator | [WARNING]: Skipped 2026-01-10 14:52:32.114566 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-10 14:52:32.114570 | orchestrator | node-2/prometheus.yml.d' path due to this access issue: 2026-01-10 14:52:32.114574 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-10 14:52:32.114579 | orchestrator | node-2/prometheus.yml.d' is not a directory 2026-01-10 14:52:32.114583 | orchestrator | [WARNING]: Skipped 2026-01-10 14:52:32.114588 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-10 14:52:32.114594 | orchestrator | node-4/prometheus.yml.d' path due to this access issue: 2026-01-10 14:52:32.114599 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-10 14:52:32.114603 | orchestrator | node-4/prometheus.yml.d' is not a directory 2026-01-10 14:52:32.114607 | orchestrator | [WARNING]: Skipped 2026-01-10 14:52:32.114612 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-10 14:52:32.114616 | orchestrator | node-5/prometheus.yml.d' path due to this access issue: 2026-01-10 14:52:32.114620 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-10 14:52:32.114624 | orchestrator | node-5/prometheus.yml.d' is not a directory 2026-01-10 14:52:32.114628 | orchestrator | [WARNING]: Skipped 2026-01-10 14:52:32.114632 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-10 14:52:32.114636 | orchestrator | node-3/prometheus.yml.d' path due to this access issue: 2026-01-10 14:52:32.114641 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-10 14:52:32.114645 | orchestrator | node-3/prometheus.yml.d' is not a directory 2026-01-10 14:52:32.114649 | 
orchestrator | ok: [testbed-node-1 -> localhost] 2026-01-10 14:52:32.114654 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-01-10 14:52:32.114657 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-01-10 14:52:32.114661 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-01-10 14:52:32.114665 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-01-10 14:52:32.114669 | orchestrator | 2026-01-10 14:52:32.114673 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************ 2026-01-10 14:52:32.114676 | orchestrator | Saturday 10 January 2026 14:50:17 +0000 (0:00:01.793) 0:01:01.886 ****** 2026-01-10 14:52:32.114680 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-01-10 14:52:32.114684 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-01-10 14:52:32.114688 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:52:32.114692 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:52:32.114695 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-01-10 14:52:32.114701 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:52:32.114705 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-01-10 14:52:32.114711 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:52:32.114715 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-01-10 14:52:32.114719 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:52:32.114735 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-01-10 14:52:32.114742 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:52:32.114748 | orchestrator | changed: [testbed-manager] => 
(item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2026-01-10 14:52:32.114755 | orchestrator | 2026-01-10 14:52:32.114759 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ******************** 2026-01-10 14:52:32.114763 | orchestrator | Saturday 10 January 2026 14:50:31 +0000 (0:00:14.007) 0:01:15.893 ****** 2026-01-10 14:52:32.114767 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-01-10 14:52:32.114770 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:52:32.114774 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-01-10 14:52:32.114778 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:52:32.114781 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-01-10 14:52:32.114785 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:52:32.114789 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-01-10 14:52:32.114793 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:52:32.114796 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-01-10 14:52:32.114800 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:52:32.114804 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-01-10 14:52:32.114807 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:52:32.114811 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2026-01-10 14:52:32.114815 | orchestrator | 2026-01-10 14:52:32.114818 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2026-01-10 14:52:32.114822 | orchestrator | Saturday 10 January 2026 14:50:34 
+0000 (0:00:02.564) 0:01:18.457 ****** 2026-01-10 14:52:32.114826 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-01-10 14:52:32.114830 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2026-01-10 14:52:32.114834 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:52:32.114838 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-01-10 14:52:32.114842 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-01-10 14:52:32.114956 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-01-10 14:52:32 | INFO  | Task 9295f051-c83d-4adb-aff1-f21d80b87b9c is in state STARTED 2026-01-10 14:52:32.114967 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:52:32.114970 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:52:32.114974 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:52:32.114978 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-01-10 14:52:32.114982 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:52:32.114985 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-01-10 14:52:32.114992 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:52:32.114996 | orchestrator | 2026-01-10 14:52:32.115000 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2026-01-10
14:52:32.115004 | orchestrator | Saturday 10 January 2026 14:50:36 +0000 (0:00:02.358) 0:01:20.815 ****** 2026-01-10 14:52:32.115007 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-10 14:52:32.115011 | orchestrator | 2026-01-10 14:52:32.115015 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2026-01-10 14:52:32.115018 | orchestrator | Saturday 10 January 2026 14:50:37 +0000 (0:00:00.856) 0:01:21.672 ****** 2026-01-10 14:52:32.115022 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:52:32.115026 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:52:32.115029 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:52:32.115033 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:52:32.115037 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:52:32.115041 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:52:32.115044 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:52:32.115048 | orchestrator | 2026-01-10 14:52:32.115052 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2026-01-10 14:52:32.115055 | orchestrator | Saturday 10 January 2026 14:50:38 +0000 (0:00:01.064) 0:01:22.736 ****** 2026-01-10 14:52:32.115059 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:52:32.115063 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:52:32.115069 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:52:32.115073 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:52:32.115077 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:52:32.115080 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:52:32.115084 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:52:32.115088 | orchestrator | 2026-01-10 14:52:32.115091 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2026-01-10 14:52:32.115095 | orchestrator | Saturday 10 January 2026 
14:50:40 +0000 (0:00:02.144) 0:01:24.881 ****** 2026-01-10 14:52:32.115099 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-01-10 14:52:32.115102 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:52:32.115106 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-01-10 14:52:32.115110 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:52:32.115114 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-01-10 14:52:32.115117 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:52:32.115121 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-01-10 14:52:32.115125 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:52:32.115128 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-01-10 14:52:32.115132 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:52:32.115136 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-01-10 14:52:32.115139 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:52:32.115143 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-01-10 14:52:32.115147 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:52:32.115150 | orchestrator | 2026-01-10 14:52:32.115154 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2026-01-10 14:52:32.115158 | orchestrator | Saturday 10 January 2026 14:50:42 +0000 (0:00:01.960) 0:01:26.841 ****** 2026-01-10 14:52:32.115162 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-01-10 14:52:32.115165 | orchestrator | skipping: [testbed-node-0] 2026-01-10 
14:52:32.115169 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-01-10 14:52:32.115182 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-01-10 14:52:32.115186 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:52:32.115190 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:52:32.115193 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-01-10 14:52:32.115197 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:52:32.115201 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-01-10 14:52:32.115205 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:52:32.115208 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-01-10 14:52:32.115212 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:52:32.115216 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2026-01-10 14:52:32.115222 | orchestrator | 2026-01-10 14:52:32.115226 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2026-01-10 14:52:32.115230 | orchestrator | Saturday 10 January 2026 14:50:44 +0000 (0:00:01.640) 0:01:28.482 ****** 2026-01-10 14:52:32.115233 | orchestrator | [WARNING]: Skipped 2026-01-10 14:52:32.115237 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2026-01-10 14:52:32.115241 | orchestrator | due to this access issue: 2026-01-10 14:52:32.115245 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2026-01-10 14:52:32.115249 | orchestrator | not a directory 
2026-01-10 14:52:32.115252 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-10 14:52:32.115256 | orchestrator | 2026-01-10 14:52:32.115260 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2026-01-10 14:52:32.115264 | orchestrator | Saturday 10 January 2026 14:50:45 +0000 (0:00:00.992) 0:01:29.475 ****** 2026-01-10 14:52:32.115267 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:52:32.115271 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:52:32.115275 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:52:32.115278 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:52:32.115282 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:52:32.115286 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:52:32.115289 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:52:32.115293 | orchestrator | 2026-01-10 14:52:32.115297 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2026-01-10 14:52:32.115301 | orchestrator | Saturday 10 January 2026 14:50:45 +0000 (0:00:00.718) 0:01:30.193 ****** 2026-01-10 14:52:32.115304 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:52:32.115308 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:52:32.115312 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:52:32.115315 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:52:32.115319 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:52:32.115323 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:52:32.115326 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:52:32.115330 | orchestrator | 2026-01-10 14:52:32.115334 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2026-01-10 14:52:32.115338 | orchestrator | Saturday 10 January 2026 14:50:46 +0000 (0:00:00.825) 0:01:31.019 ****** 2026-01-10 14:52:32.115343 | orchestrator | changed: 
[testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-01-10 14:52:32.115351 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-10 14:52:32.115355 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-10 14:52:32.115360 | orchestrator | changed: [testbed-manager] => (item={'key': 
'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-10 14:52:32.115366 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-10 14:52:32.115370 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-10 14:52:32.115374 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-10 14:52:32.115380 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:52:32.115392 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-10 14:52:32.115401 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:52:32.115410 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-10 14:52:32.115417 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:52:32.115427 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-10 14:52:32.115434 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-10 14:52:32.115440 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:52:32.115450 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-10 14:52:32.115461 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:52:32.115468 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-01-10 14:52:32.115479 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:52:32.115486 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-01-10 14:52:32.115492 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-01-10 14:52:32.115499 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:52:32.115513 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-10 14:52:32.115518 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-01-10 14:52:32.115522 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-10 14:52:32.115526 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-10 14:52:32.115534 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:52:32.115538 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:52:32.115542 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:52:32.115548 | orchestrator |
2026-01-10 14:52:32.115552 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] ***
2026-01-10 14:52:32.115556 | orchestrator | Saturday 10 January 2026 14:50:51 +0000 (0:00:04.913) 0:01:35.932 ******
2026-01-10 14:52:32.115560 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-01-10 14:52:32.115564 | orchestrator | skipping: [testbed-manager]
2026-01-10 14:52:32.115568 | orchestrator |
2026-01-10 14:52:32.115573 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-01-10 14:52:32.115577 | orchestrator | Saturday 10 January 2026 14:50:52 +0000 (0:00:01.141) 0:01:37.074 ******
2026-01-10 14:52:32.115581 | orchestrator |
2026-01-10 14:52:32.115585 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-01-10 14:52:32.115589 | orchestrator | Saturday 10 January 2026 14:50:52 +0000 (0:00:00.062) 0:01:37.137 ******
2026-01-10 14:52:32.115593 | orchestrator |
2026-01-10 14:52:32.115596 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-01-10 14:52:32.115600 | orchestrator | Saturday 10 January 2026 14:50:52 +0000 (0:00:00.059) 0:01:37.196 ******
2026-01-10 14:52:32.115604 | orchestrator |
2026-01-10 14:52:32.115608 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-01-10 14:52:32.115611 | orchestrator | Saturday 10 January 2026 14:50:53 +0000 (0:00:00.058) 0:01:37.255 ******
2026-01-10 14:52:32.115615 | orchestrator |
2026-01-10 14:52:32.115619 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-01-10 14:52:32.115623 | orchestrator | Saturday 10 January 2026 14:50:53 +0000 (0:00:00.174) 0:01:37.429 ******
2026-01-10 14:52:32.115626 | orchestrator |
2026-01-10 14:52:32.115630 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-01-10 14:52:32.115634 | orchestrator | Saturday 10 January 2026 14:50:53 +0000 (0:00:00.059) 0:01:37.488 ******
2026-01-10 14:52:32.115637 | orchestrator |
2026-01-10 14:52:32.115641 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-01-10 14:52:32.115645 | orchestrator | Saturday 10 January 2026 14:50:53 +0000 (0:00:00.066) 0:01:37.555 ******
2026-01-10 14:52:32.115649 | orchestrator |
2026-01-10 14:52:32.115652 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] *************
2026-01-10 14:52:32.115656 | orchestrator | Saturday 10 January 2026 14:50:53 +0000 (0:00:00.080) 0:01:37.635 ******
2026-01-10 14:52:32.115660 | orchestrator | changed: [testbed-manager]
2026-01-10 14:52:32.115664 | orchestrator |
2026-01-10 14:52:32.115667 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ******
2026-01-10 14:52:32.115671 | orchestrator | Saturday 10 January 2026 14:51:07 +0000 (0:00:13.780) 0:01:51.416 ******
2026-01-10 14:52:32.115675 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:52:32.115679 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:52:32.115682 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:52:32.115686 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:52:32.115690 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:52:32.115694 | orchestrator | changed: [testbed-manager]
2026-01-10 14:52:32.115697 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:52:32.115701 | orchestrator |
2026-01-10 14:52:32.115705 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] ****
2026-01-10 14:52:32.115708 | orchestrator | Saturday 10 January 2026 14:51:21 +0000 (0:00:14.443) 0:02:05.859 ******
2026-01-10 14:52:32.115712 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:52:32.115716 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:52:32.115720 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:52:32.115742 | orchestrator |
2026-01-10 14:52:32.115747 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] ***
2026-01-10 14:52:32.115750 | orchestrator | Saturday 10 January 2026 14:51:32 +0000 (0:00:11.030) 0:02:16.890 ******
2026-01-10 14:52:32.115754 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:52:32.115758 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:52:32.115761 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:52:32.115768 | orchestrator |
2026-01-10 14:52:32.115772 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] ***********
2026-01-10 14:52:32.115775 | orchestrator | Saturday 10 January 2026 14:51:43 +0000 (0:00:10.836) 0:02:27.727 ******
2026-01-10 14:52:32.115779 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:52:32.115783 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:52:32.115789 | orchestrator | changed: [testbed-manager]
2026-01-10 14:52:32.115793 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:52:32.115796 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:52:32.115800 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:52:32.115804 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:52:32.115807 | orchestrator |
2026-01-10 14:52:32.115811 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] *******
2026-01-10 14:52:32.115815 | orchestrator | Saturday 10 January 2026 14:51:58 +0000 (0:00:15.118) 0:02:42.845 ******
2026-01-10 14:52:32.115819 | orchestrator | changed: [testbed-manager]
2026-01-10 14:52:32.115822 | orchestrator |
2026-01-10 14:52:32.115826 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] ***
2026-01-10 14:52:32.115830 | orchestrator | Saturday 10 January 2026 14:52:07 +0000 (0:00:08.600) 0:02:51.446 ******
2026-01-10 14:52:32.115834 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:52:32.115837 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:52:32.115841 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:52:32.115845 | orchestrator |
2026-01-10 14:52:32.115849 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] ***
2026-01-10 14:52:32.115852 | orchestrator | Saturday 10 January 2026 14:52:18 +0000 (0:00:11.260) 0:03:02.707 ******
2026-01-10 14:52:32.115856 | orchestrator | changed: [testbed-manager]
2026-01-10 14:52:32.115860 | orchestrator |
2026-01-10 14:52:32.115863 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] ***
2026-01-10 14:52:32.115867 | orchestrator | Saturday 10 January 2026 14:52:23 +0000 (0:00:05.048) 0:03:07.756 ******
2026-01-10 14:52:32.115871 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:52:32.115875 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:52:32.115878 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:52:32.115882 | orchestrator |
2026-01-10 14:52:32.115886 | orchestrator | PLAY RECAP *********************************************************************
2026-01-10 14:52:32.115890 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2026-01-10 14:52:32.115894 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-01-10 14:52:32.115900 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-01-10 14:52:32.115904 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-01-10 14:52:32.115907 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-01-10 14:52:32.115911 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-01-10 14:52:32.115915 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-01-10 14:52:32.115919 | orchestrator |
2026-01-10 14:52:32.115922 | orchestrator |
2026-01-10 14:52:32.115926 | orchestrator | TASKS RECAP ********************************************************************
2026-01-10 14:52:32.115930 | orchestrator | Saturday 10 January 2026 14:52:28 +0000 (0:00:05.364) 0:03:13.120 ******
2026-01-10 14:52:32.115934 | orchestrator | ===============================================================================
2026-01-10 14:52:32.115940 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 31.28s
2026-01-10 14:52:32.115944 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 15.12s
2026-01-10 14:52:32.115948 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 14.44s
2026-01-10 14:52:32.115951 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 14.01s
2026-01-10 14:52:32.115955 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 13.78s
2026-01-10 14:52:32.115959 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container ------- 11.26s
2026-01-10 14:52:32.115963 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container -------------- 11.03s
2026-01-10 14:52:32.115966 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ----------- 10.84s
2026-01-10 14:52:32.115970 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 8.60s
2026-01-10 14:52:32.115974 | orchestrator | prometheus : Copying over config.json files ----------------------------- 6.73s
2026-01-10 14:52:32.115977 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container -------------- 5.36s
2026-01-10 14:52:32.115981 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 5.30s
2026-01-10 14:52:32.115986 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 5.05s
2026-01-10 14:52:32.115992 | orchestrator | prometheus : Check prometheus containers -------------------------------- 4.91s
2026-01-10 14:52:32.115997 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 3.76s
2026-01-10 14:52:32.116001 | orchestrator | prometheus : include_tasks ---------------------------------------------- 3.24s
2026-01-10 14:52:32.116004 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 2.56s
2026-01-10 14:52:32.116008 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS key --- 2.46s
2026-01-10 14:52:32.116014 | orchestrator | prometheus : Copying over prometheus alertmanager config file ----------- 2.36s
2026-01-10 14:52:32.116018 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 2.14s
2026-01-10 14:52:32.116022 | orchestrator | 2026-01-10 14:52:32 | INFO  | Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED
2026-01-10 14:52:32.116026 | orchestrator | 2026-01-10 14:52:32 | INFO  | Task 08260106-6ef9-4439-a5b4-b55560d7a436 is in state STARTED
2026-01-10 14:52:32.116029 | orchestrator | 2026-01-10 14:52:32 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:52:35.140191 | orchestrator | 2026-01-10 14:52:35 | INFO  | Task c00b516f-16a9-42ea-a1ca-3fe42da79d38 is in state STARTED
2026-01-10 14:52:35.142963 | orchestrator | 2026-01-10 14:52:35 | INFO  | Task 9295f051-c83d-4adb-aff1-f21d80b87b9c is in state STARTED
2026-01-10 14:52:35.143583 | orchestrator | 2026-01-10 14:52:35 | INFO  | Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED
2026-01-10 14:52:35.144336 | orchestrator | 2026-01-10 14:52:35 | INFO  | Task 08260106-6ef9-4439-a5b4-b55560d7a436 is in state STARTED
2026-01-10 14:52:35.144469 | orchestrator | 2026-01-10 14:52:35 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:52:38.185279 | orchestrator | 2026-01-10 14:52:38 | INFO  | Task c00b516f-16a9-42ea-a1ca-3fe42da79d38 is in state STARTED
2026-01-10 14:52:38.188390 | orchestrator | 2026-01-10 14:52:38 | INFO  | Task 9295f051-c83d-4adb-aff1-f21d80b87b9c is in state STARTED
2026-01-10 14:52:38.190876 | orchestrator | 2026-01-10 14:52:38 | INFO  | Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED
2026-01-10 14:52:38.193741 | orchestrator | 2026-01-10 14:52:38 | INFO  | Task 08260106-6ef9-4439-a5b4-b55560d7a436 is in state STARTED
2026-01-10 14:52:38.193808 | orchestrator | 2026-01-10 14:52:38 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:52:41.224547 | orchestrator | 2026-01-10 14:52:41 | INFO  | Task c00b516f-16a9-42ea-a1ca-3fe42da79d38 is in state STARTED
2026-01-10 14:52:41.227706 | orchestrator | 2026-01-10 14:52:41 | INFO  | Task 9295f051-c83d-4adb-aff1-f21d80b87b9c is in state STARTED
2026-01-10 14:52:41.229830 | orchestrator | 2026-01-10 14:52:41 | INFO  | Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED
2026-01-10 14:52:41.231400 | orchestrator | 2026-01-10 14:52:41 | INFO  | Task 08260106-6ef9-4439-a5b4-b55560d7a436 is in state STARTED
2026-01-10 14:52:41.231724 | orchestrator | 2026-01-10 14:52:41 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:52:44.283031 | orchestrator | 2026-01-10 14:52:44 | INFO  | Task c00b516f-16a9-42ea-a1ca-3fe42da79d38 is in state STARTED
2026-01-10 14:52:44.283304 | orchestrator | 2026-01-10 14:52:44 | INFO  | Task 9295f051-c83d-4adb-aff1-f21d80b87b9c is in state STARTED
2026-01-10 14:52:44.285008 | orchestrator | 2026-01-10 14:52:44 | INFO  | Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED
2026-01-10 14:52:44.285471 | orchestrator | 2026-01-10 14:52:44 | INFO  | Task 08260106-6ef9-4439-a5b4-b55560d7a436 is in state STARTED
2026-01-10 14:52:44.285500 | orchestrator | 2026-01-10 14:52:44 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:52:47.308549 | orchestrator | 2026-01-10 14:52:47 | INFO  | Task c00b516f-16a9-42ea-a1ca-3fe42da79d38 is in state STARTED
2026-01-10 14:52:47.308756 | orchestrator | 2026-01-10 14:52:47 | INFO  | Task 9295f051-c83d-4adb-aff1-f21d80b87b9c is in state STARTED
2026-01-10 14:52:47.309569 | orchestrator | 2026-01-10 14:52:47 | INFO  | Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED
2026-01-10 14:52:47.310308 | orchestrator | 2026-01-10 14:52:47 | INFO  | Task 08260106-6ef9-4439-a5b4-b55560d7a436 is in state STARTED
2026-01-10 14:52:47.310330 | orchestrator | 2026-01-10 14:52:47 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:52:50.339783 | orchestrator | 2026-01-10 14:52:50 | INFO  | Task c00b516f-16a9-42ea-a1ca-3fe42da79d38 is in state STARTED
2026-01-10 14:52:50.341103 | orchestrator | 2026-01-10 14:52:50 | INFO  | Task 9295f051-c83d-4adb-aff1-f21d80b87b9c is in state STARTED
2026-01-10 14:52:50.343171 | orchestrator | 2026-01-10 14:52:50 | INFO  | Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED
2026-01-10 14:52:50.345240 | orchestrator | 2026-01-10 14:52:50 | INFO  | Task 08260106-6ef9-4439-a5b4-b55560d7a436 is in state STARTED
2026-01-10 14:52:50.345347 | orchestrator | 2026-01-10 14:52:50 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:52:53.390098 | orchestrator | 2026-01-10 14:52:53 | INFO  | Task c00b516f-16a9-42ea-a1ca-3fe42da79d38 is in state STARTED
2026-01-10 14:52:53.392158 | orchestrator | 2026-01-10 14:52:53 | INFO  | Task 9295f051-c83d-4adb-aff1-f21d80b87b9c is in state STARTED
2026-01-10 14:52:53.393897 | orchestrator | 2026-01-10 14:52:53 | INFO  | Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED
2026-01-10 14:52:53.396384 | orchestrator | 2026-01-10 14:52:53 | INFO  | Task 08260106-6ef9-4439-a5b4-b55560d7a436 is in state STARTED
2026-01-10 14:52:53.396629 | orchestrator | 2026-01-10 14:52:53 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:52:56.436561 | orchestrator | 2026-01-10 14:52:56 | INFO  | Task c00b516f-16a9-42ea-a1ca-3fe42da79d38 is in state STARTED
2026-01-10 14:52:56.438248 | orchestrator | 2026-01-10 14:52:56 | INFO  | Task 9295f051-c83d-4adb-aff1-f21d80b87b9c is in state STARTED
2026-01-10 14:52:56.440662 | orchestrator | 2026-01-10 14:52:56 | INFO  | Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED
2026-01-10 14:52:56.443403 | orchestrator | 2026-01-10 14:52:56 | INFO  | Task 08260106-6ef9-4439-a5b4-b55560d7a436 is in state STARTED
2026-01-10 14:52:56.443469 | orchestrator | 2026-01-10 14:52:56 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:52:59.497032 | orchestrator | 2026-01-10 14:52:59 | INFO  | Task c00b516f-16a9-42ea-a1ca-3fe42da79d38 is in state STARTED
2026-01-10 14:52:59.498457 | orchestrator | 2026-01-10 14:52:59 | INFO  | Task 9295f051-c83d-4adb-aff1-f21d80b87b9c is in state STARTED
2026-01-10 14:52:59.499795 | orchestrator | 2026-01-10 14:52:59 | INFO  | Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED
2026-01-10 14:52:59.501196 | orchestrator | 2026-01-10 14:52:59 | INFO  | Task 08260106-6ef9-4439-a5b4-b55560d7a436 is in state STARTED
2026-01-10 14:52:59.501328 | orchestrator | 2026-01-10 14:52:59 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:53:02.556546 | orchestrator | 2026-01-10 14:53:02 | INFO  | Task c00b516f-16a9-42ea-a1ca-3fe42da79d38 is in state STARTED
2026-01-10 14:53:02.559530 | orchestrator | 2026-01-10 14:53:02 | INFO  | Task 9295f051-c83d-4adb-aff1-f21d80b87b9c is in state STARTED
2026-01-10 14:53:02.563017 | orchestrator | 2026-01-10 14:53:02 | INFO  | Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED
2026-01-10 14:53:02.565393 | orchestrator | 2026-01-10 14:53:02 | INFO  | Task 5623d6c4-2e01-4726-8f56-a3739e32c524 is in state STARTED
2026-01-10 14:53:02.568101 | orchestrator | 2026-01-10 14:53:02 | INFO  | Task 08260106-6ef9-4439-a5b4-b55560d7a436 is in state SUCCESS
2026-01-10 14:53:02.569943 | orchestrator |
2026-01-10 14:53:02.569995 | orchestrator |
2026-01-10 14:53:02.570003 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-10 14:53:02.570007 | orchestrator |
2026-01-10 14:53:02.570011 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-10 14:53:02.570043 | orchestrator | Saturday 10 January 2026 14:50:10 +0000 (0:00:00.234) 0:00:00.235 ******
2026-01-10 14:53:02.570054 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:53:02.570061 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:53:02.570067 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:53:02.570072 | orchestrator |
2026-01-10 14:53:02.570077 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-10 14:53:02.570082 | orchestrator | Saturday 10 January 2026 14:50:10 +0000 (0:00:00.269) 0:00:00.504 ******
2026-01-10 14:53:02.570088 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True)
2026-01-10 14:53:02.570094 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True)
2026-01-10 14:53:02.570099 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True)
2026-01-10 14:53:02.570105 | orchestrator |
2026-01-10 14:53:02.570111 | orchestrator | PLAY [Apply role glance] *******************************************************
2026-01-10 14:53:02.570116 | orchestrator |
2026-01-10 14:53:02.570122 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-01-10 14:53:02.570126 | orchestrator | Saturday 10 January 2026 14:50:11 +0000 (0:00:00.379) 0:00:00.884 ******
2026-01-10 14:53:02.570129 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:53:02.570133 | orchestrator |
2026-01-10 14:53:02.570136 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************
2026-01-10 14:53:02.570139 | orchestrator | Saturday 10 January 2026 14:50:11 +0000 (0:00:00.489) 0:00:01.373 ******
2026-01-10 14:53:02.570142 | orchestrator | changed: [testbed-node-0] => (item=glance (image))
2026-01-10 14:53:02.570145 | orchestrator |
2026-01-10 14:53:02.570148 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] ***********************
2026-01-10 14:53:02.570151 | orchestrator | Saturday 10 January 2026 14:50:15 +0000 (0:00:03.275) 0:00:04.649 ******
2026-01-10 14:53:02.570155 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal)
2026-01-10 14:53:02.570174 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public)
2026-01-10 14:53:02.570181 | orchestrator |
2026-01-10 14:53:02.570186 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************
2026-01-10 14:53:02.570192 | orchestrator | Saturday 10 January 2026 14:50:20 +0000 (0:00:05.747) 0:00:10.396 ******
2026-01-10 14:53:02.570196 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-01-10 14:53:02.570199 | orchestrator |
2026-01-10 14:53:02.570275 | orchestrator | TASK [service-ks-register : glance | Creating users] ***************************
2026-01-10 14:53:02.570280 | orchestrator | Saturday 10 January 2026 14:50:23 +0000 (0:00:02.965) 0:00:13.362 ******
2026-01-10 14:53:02.570284 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-01-10 14:53:02.570288 | orchestrator | changed: [testbed-node-0] => (item=glance -> service)
2026-01-10 14:53:02.570294 | orchestrator |
2026-01-10 14:53:02.570299 | orchestrator | TASK [service-ks-register : glance | Creating roles] ***************************
2026-01-10 14:53:02.570304 | orchestrator | Saturday 10 January 2026 14:50:27 +0000 (0:00:03.796) 0:00:17.158 ******
2026-01-10 14:53:02.570310 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-01-10 14:53:02.570315 | orchestrator |
2026-01-10 14:53:02.570321 | orchestrator | TASK [service-ks-register : glance | Granting user roles] **********************
2026-01-10 14:53:02.570326 | orchestrator | Saturday 10 January 2026 14:50:30 +0000 (0:00:03.204) 0:00:20.362 ******
2026-01-10 14:53:02.570331 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin)
2026-01-10 14:53:02.570336 | orchestrator |
2026-01-10 14:53:02.570342 | orchestrator | TASK [glance : Ensuring config directories exist] ******************************
2026-01-10 14:53:02.570347 | orchestrator | Saturday 10 January 2026 14:50:34 +0000 (0:00:03.613) 0:00:23.976 ******
2026-01-10 14:53:02.570373 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-01-10 14:53:02.570381 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-01-10 14:53:02.570398 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-01-10 14:53:02.570405 | orchestrator |
2026-01-10 14:53:02.570410 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-01-10 14:53:02.570415 | orchestrator | Saturday 10 January 2026 14:50:38 +0000 (0:00:04.556) 0:00:28.532 ******
2026-01-10 14:53:02.570423 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:53:02.570429 | orchestrator |
2026-01-10 14:53:02.570439 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] **************
2026-01-10 14:53:02.570444 | orchestrator | Saturday 10 January 2026 14:50:40 +0000
(0:00:01.093) 0:00:29.626 ****** 2026-01-10 14:53:02.570450 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:53:02.570455 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:53:02.570461 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:53:02.570466 | orchestrator | 2026-01-10 14:53:02.570471 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2026-01-10 14:53:02.570477 | orchestrator | Saturday 10 January 2026 14:50:43 +0000 (0:00:03.947) 0:00:33.573 ****** 2026-01-10 14:53:02.570482 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-01-10 14:53:02.570492 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-01-10 14:53:02.570496 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-01-10 14:53:02.570501 | orchestrator | 2026-01-10 14:53:02.570506 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2026-01-10 14:53:02.570512 | orchestrator | Saturday 10 January 2026 14:50:45 +0000 (0:00:01.419) 0:00:34.993 ****** 2026-01-10 14:53:02.570517 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-01-10 14:53:02.570523 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-01-10 14:53:02.570528 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-01-10 14:53:02.570534 | orchestrator | 2026-01-10 14:53:02.570538 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2026-01-10 14:53:02.570543 | orchestrator | Saturday 10 January 2026 14:50:46 +0000 (0:00:01.096) 0:00:36.089 ****** 2026-01-10 
14:53:02.570548 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:53:02.570554 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:53:02.570559 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:53:02.570564 | orchestrator | 2026-01-10 14:53:02.570568 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2026-01-10 14:53:02.570573 | orchestrator | Saturday 10 January 2026 14:50:47 +0000 (0:00:00.710) 0:00:36.800 ****** 2026-01-10 14:53:02.570578 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:53:02.570583 | orchestrator | 2026-01-10 14:53:02.570588 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2026-01-10 14:53:02.570593 | orchestrator | Saturday 10 January 2026 14:50:47 +0000 (0:00:00.512) 0:00:37.313 ****** 2026-01-10 14:53:02.570598 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:53:02.570603 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:53:02.570608 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:53:02.570613 | orchestrator | 2026-01-10 14:53:02.570618 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-01-10 14:53:02.570623 | orchestrator | Saturday 10 January 2026 14:50:48 +0000 (0:00:00.393) 0:00:37.706 ****** 2026-01-10 14:53:02.570627 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:53:02.570632 | orchestrator | 2026-01-10 14:53:02.570636 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2026-01-10 14:53:02.570641 | orchestrator | Saturday 10 January 2026 14:50:48 +0000 (0:00:00.683) 0:00:38.390 ****** 2026-01-10 14:53:02.570655 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-10 14:53:02.570667 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-10 14:53:02.570673 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-10 14:53:02.570693 | orchestrator | 2026-01-10 14:53:02.570698 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2026-01-10 14:53:02.570709 | orchestrator | Saturday 10 January 2026 14:50:53 +0000 (0:00:04.799) 0:00:43.190 ****** 2026-01-10 14:53:02.570720 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-10 14:53:02.570726 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:53:02.570732 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-10 14:53:02.570738 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:53:02.570750 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 
192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-10 14:53:02.570760 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:53:02.570765 | orchestrator | 2026-01-10 14:53:02.570771 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2026-01-10 14:53:02.570777 | orchestrator | Saturday 10 January 2026 14:50:57 +0000 (0:00:03.693) 0:00:46.884 ****** 2026-01-10 14:53:02.570781 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 
'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-10 14:53:02.570784 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:53:02.570791 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check 
inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-10 14:53:02.570797 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:53:02.570801 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-10 14:53:02.570804 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:53:02.570807 | orchestrator | 2026-01-10 14:53:02.570810 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2026-01-10 14:53:02.570813 | orchestrator | Saturday 10 January 2026 14:51:01 +0000 (0:00:03.927) 0:00:50.811 ****** 2026-01-10 14:53:02.570817 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:53:02.570820 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:53:02.570823 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:53:02.570826 | orchestrator | 2026-01-10 14:53:02.570829 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2026-01-10 14:53:02.570832 | orchestrator | Saturday 10 January 2026 14:51:05 +0000 (0:00:04.083) 0:00:54.895 ****** 2026-01-10 14:53:02.570839 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-10 14:53:02.570849 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-10 14:53:02.570853 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 
'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-10 14:53:02.570858 | orchestrator | 2026-01-10 14:53:02.570861 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2026-01-10 14:53:02.570865 | orchestrator | Saturday 10 January 2026 14:51:13 +0000 (0:00:08.243) 0:01:03.138 ****** 2026-01-10 14:53:02.570868 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:53:02.570871 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:53:02.570875 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:53:02.570879 | orchestrator | 2026-01-10 14:53:02.570882 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2026-01-10 14:53:02.570885 | orchestrator | Saturday 10 January 2026 14:51:18 +0000 (0:00:05.300) 0:01:08.439 ****** 2026-01-10 14:53:02.570888 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:53:02.570891 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:53:02.570894 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:53:02.570897 | orchestrator | 2026-01-10 14:53:02.570901 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2026-01-10 14:53:02.570904 | orchestrator | Saturday 10 January 2026 14:51:24 +0000 (0:00:05.975) 0:01:14.415 ****** 2026-01-10 14:53:02.570907 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:53:02.570912 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:53:02.570915 | orchestrator | skipping: [testbed-node-1] 2026-01-10 
14:53:02.570918 | orchestrator | 2026-01-10 14:53:02.570922 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2026-01-10 14:53:02.570925 | orchestrator | Saturday 10 January 2026 14:51:28 +0000 (0:00:03.440) 0:01:17.855 ****** 2026-01-10 14:53:02.570928 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:53:02.570931 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:53:02.570934 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:53:02.570937 | orchestrator | 2026-01-10 14:53:02.570940 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2026-01-10 14:53:02.570943 | orchestrator | Saturday 10 January 2026 14:51:31 +0000 (0:00:03.223) 0:01:21.079 ****** 2026-01-10 14:53:02.570947 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:53:02.570950 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:53:02.570953 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:53:02.570956 | orchestrator | 2026-01-10 14:53:02.570959 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2026-01-10 14:53:02.570962 | orchestrator | Saturday 10 January 2026 14:51:35 +0000 (0:00:04.106) 0:01:25.185 ****** 2026-01-10 14:53:02.570966 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:53:02.570971 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:53:02.570977 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:53:02.570982 | orchestrator | 2026-01-10 14:53:02.570985 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2026-01-10 14:53:02.570989 | orchestrator | Saturday 10 January 2026 14:51:35 +0000 (0:00:00.286) 0:01:25.472 ****** 2026-01-10 14:53:02.570992 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-01-10 14:53:02.570996 | orchestrator | skipping: [testbed-node-0] 2026-01-10 
14:53:02.571001 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-01-10 14:53:02.571007 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:53:02.571012 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-01-10 14:53:02.571020 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:53:02.571023 | orchestrator | 2026-01-10 14:53:02.571026 | orchestrator | TASK [glance : Generating 'hostnqn' file for glance_api] *********************** 2026-01-10 14:53:02.571029 | orchestrator | Saturday 10 January 2026 14:51:39 +0000 (0:00:03.362) 0:01:28.835 ****** 2026-01-10 14:53:02.571032 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:53:02.571035 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:53:02.571038 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:53:02.571041 | orchestrator | 2026-01-10 14:53:02.571045 | orchestrator | TASK [glance : Check glance containers] **************************************** 2026-01-10 14:53:02.571048 | orchestrator | Saturday 10 January 2026 14:51:42 +0000 (0:00:03.563) 0:01:32.398 ****** 2026-01-10 14:53:02.571053 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 
'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-10 14:53:02.571059 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 
rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-10 14:53:02.571066 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-10 14:53:02.571069 | orchestrator | 2026-01-10 14:53:02.571072 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-01-10 14:53:02.571075 | orchestrator | Saturday 10 January 2026 14:51:49 +0000 (0:00:06.987) 0:01:39.386 ****** 2026-01-10 14:53:02.571078 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:53:02.571082 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:53:02.571085 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:53:02.571088 | orchestrator | 2026-01-10 14:53:02.571091 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2026-01-10 14:53:02.571094 | orchestrator | Saturday 10 January 2026 14:51:50 +0000 (0:00:00.283) 0:01:39.670 ****** 2026-01-10 14:53:02.571097 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:53:02.571100 | orchestrator | 2026-01-10 14:53:02.571103 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] ********** 2026-01-10 14:53:02.571108 | orchestrator | Saturday 10 January 2026 14:51:52 +0000 (0:00:02.154) 0:01:41.824 ****** 2026-01-10 14:53:02.571115 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:53:02.571120 | orchestrator | 2026-01-10 14:53:02.571123 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2026-01-10 14:53:02.571126 | orchestrator | Saturday 10 January 2026 14:51:54 +0000 (0:00:02.122) 0:01:43.946 ****** 2026-01-10 14:53:02.571129 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:53:02.571132 | orchestrator | 
2026-01-10 14:53:02.571137 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2026-01-10 14:53:02.571142 | orchestrator | Saturday 10 January 2026 14:51:56 +0000 (0:00:01.751) 0:01:45.697 ****** 2026-01-10 14:53:02.571146 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:53:02.571154 | orchestrator | 2026-01-10 14:53:02.571160 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2026-01-10 14:53:02.571168 | orchestrator | Saturday 10 January 2026 14:52:23 +0000 (0:00:26.945) 0:02:12.642 ****** 2026-01-10 14:53:02.571173 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:53:02.571179 | orchestrator | 2026-01-10 14:53:02.571184 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-01-10 14:53:02.571190 | orchestrator | Saturday 10 January 2026 14:52:25 +0000 (0:00:01.968) 0:02:14.611 ****** 2026-01-10 14:53:02.571199 | orchestrator | 2026-01-10 14:53:02.571205 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-01-10 14:53:02.571210 | orchestrator | Saturday 10 January 2026 14:52:25 +0000 (0:00:00.426) 0:02:15.038 ****** 2026-01-10 14:53:02.571215 | orchestrator | 2026-01-10 14:53:02.571221 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-01-10 14:53:02.571226 | orchestrator | Saturday 10 January 2026 14:52:25 +0000 (0:00:00.078) 0:02:15.117 ****** 2026-01-10 14:53:02.571231 | orchestrator | 2026-01-10 14:53:02.571236 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2026-01-10 14:53:02.571240 | orchestrator | Saturday 10 January 2026 14:52:25 +0000 (0:00:00.073) 0:02:15.190 ****** 2026-01-10 14:53:02.571245 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:53:02.571250 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:53:02.571255 | orchestrator | 
changed: [testbed-node-1] 2026-01-10 14:53:02.571261 | orchestrator | 2026-01-10 14:53:02.571266 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-10 14:53:02.571272 | orchestrator | testbed-node-0 : ok=27  changed=19  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-01-10 14:53:02.571278 | orchestrator | testbed-node-1 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-01-10 14:53:02.571284 | orchestrator | testbed-node-2 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-01-10 14:53:02.571289 | orchestrator | 2026-01-10 14:53:02.571294 | orchestrator | 2026-01-10 14:53:02.571300 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-10 14:53:02.571304 | orchestrator | Saturday 10 January 2026 14:52:59 +0000 (0:00:33.933) 0:02:49.124 ****** 2026-01-10 14:53:02.571307 | orchestrator | =============================================================================== 2026-01-10 14:53:02.571311 | orchestrator | glance : Restart glance-api container ---------------------------------- 33.93s 2026-01-10 14:53:02.571314 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 26.95s 2026-01-10 14:53:02.571317 | orchestrator | glance : Copying over config.json files for services -------------------- 8.24s 2026-01-10 14:53:02.571320 | orchestrator | glance : Check glance containers ---------------------------------------- 6.99s 2026-01-10 14:53:02.571323 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 5.98s 2026-01-10 14:53:02.571326 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 5.75s 2026-01-10 14:53:02.571329 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 5.30s 2026-01-10 14:53:02.571332 | orchestrator | service-cert-copy : glance | 
Copying over extra CA certificates --------- 4.80s 2026-01-10 14:53:02.571337 | orchestrator | glance : Ensuring config directories exist ------------------------------ 4.56s 2026-01-10 14:53:02.571341 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 4.11s 2026-01-10 14:53:02.571356 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 4.08s 2026-01-10 14:53:02.571362 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 3.95s 2026-01-10 14:53:02.571367 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 3.93s 2026-01-10 14:53:02.571372 | orchestrator | service-ks-register : glance | Creating users --------------------------- 3.80s 2026-01-10 14:53:02.571377 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS certificate --- 3.69s 2026-01-10 14:53:02.571382 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 3.61s 2026-01-10 14:53:02.571387 | orchestrator | glance : Generating 'hostnqn' file for glance_api ----------------------- 3.56s 2026-01-10 14:53:02.571393 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 3.44s 2026-01-10 14:53:02.571399 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 3.36s 2026-01-10 14:53:02.571408 | orchestrator | service-ks-register : glance | Creating services ------------------------ 3.28s 2026-01-10 14:53:02.571414 | orchestrator | 2026-01-10 14:53:02 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:53:05.620443 | orchestrator | 2026-01-10 14:53:05 | INFO  | Task c00b516f-16a9-42ea-a1ca-3fe42da79d38 is in state STARTED 2026-01-10 14:53:05.622519 | orchestrator | 2026-01-10 14:53:05 | INFO  | Task 9295f051-c83d-4adb-aff1-f21d80b87b9c is in state STARTED 2026-01-10 14:53:05.624496 | orchestrator | 2026-01-10 14:53:05 | INFO  
| Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED 2026-01-10 14:53:05.626271 | orchestrator | 2026-01-10 14:53:05 | INFO  | Task 5623d6c4-2e01-4726-8f56-a3739e32c524 is in state STARTED 2026-01-10 14:53:05.626323 | orchestrator | 2026-01-10 14:53:05 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:53:08.679215 | orchestrator | 2026-01-10 14:53:08 | INFO  | Task c00b516f-16a9-42ea-a1ca-3fe42da79d38 is in state STARTED 2026-01-10 14:53:08.683142 | orchestrator | 2026-01-10 14:53:08 | INFO  | Task 9295f051-c83d-4adb-aff1-f21d80b87b9c is in state STARTED 2026-01-10 14:53:08.685371 | orchestrator | 2026-01-10 14:53:08 | INFO  | Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED 2026-01-10 14:53:08.687295 | orchestrator | 2026-01-10 14:53:08 | INFO  | Task 5623d6c4-2e01-4726-8f56-a3739e32c524 is in state STARTED 2026-01-10 14:53:08.687495 | orchestrator | 2026-01-10 14:53:08 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:53:11.714155 | orchestrator | 2026-01-10 14:53:11 | INFO  | Task c00b516f-16a9-42ea-a1ca-3fe42da79d38 is in state STARTED 2026-01-10 14:53:11.714541 | orchestrator | 2026-01-10 14:53:11 | INFO  | Task 9295f051-c83d-4adb-aff1-f21d80b87b9c is in state STARTED 2026-01-10 14:53:11.715506 | orchestrator | 2026-01-10 14:53:11 | INFO  | Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED 2026-01-10 14:53:11.716486 | orchestrator | 2026-01-10 14:53:11 | INFO  | Task 5623d6c4-2e01-4726-8f56-a3739e32c524 is in state STARTED 2026-01-10 14:53:11.716517 | orchestrator | 2026-01-10 14:53:11 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:53:14.758889 | orchestrator | 2026-01-10 14:53:14 | INFO  | Task c00b516f-16a9-42ea-a1ca-3fe42da79d38 is in state STARTED 2026-01-10 14:53:14.762556 | orchestrator | 2026-01-10 14:53:14 | INFO  | Task 9295f051-c83d-4adb-aff1-f21d80b87b9c is in state STARTED 2026-01-10 14:53:14.763330 | orchestrator | 2026-01-10 14:53:14 | INFO  | Task 
70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED 2026-01-10 14:53:14.764203 | orchestrator | 2026-01-10 14:53:14 | INFO  | Task 5623d6c4-2e01-4726-8f56-a3739e32c524 is in state STARTED 2026-01-10 14:53:14.764234 | orchestrator | 2026-01-10 14:53:14 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:53:17.803062 | orchestrator | 2026-01-10 14:53:17 | INFO  | Task c00b516f-16a9-42ea-a1ca-3fe42da79d38 is in state SUCCESS 2026-01-10 14:53:17.803972 | orchestrator | 2026-01-10 14:53:17.804001 | orchestrator | 2026-01-10 14:53:17.804005 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-10 14:53:17.804010 | orchestrator | 2026-01-10 14:53:17.804014 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-10 14:53:17.804018 | orchestrator | Saturday 10 January 2026 14:50:17 +0000 (0:00:00.374) 0:00:00.374 ****** 2026-01-10 14:53:17.804021 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:53:17.804026 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:53:17.804030 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:53:17.804033 | orchestrator | 2026-01-10 14:53:17.804037 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-10 14:53:17.804054 | orchestrator | Saturday 10 January 2026 14:50:17 +0000 (0:00:00.449) 0:00:00.824 ****** 2026-01-10 14:53:17.804058 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2026-01-10 14:53:17.804061 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2026-01-10 14:53:17.804065 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2026-01-10 14:53:17.804069 | orchestrator | 2026-01-10 14:53:17.804072 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2026-01-10 14:53:17.804075 | orchestrator | 2026-01-10 14:53:17.804096 | orchestrator | TASK [cinder : include_tasks] 
************************************************** 2026-01-10 14:53:17.804102 | orchestrator | Saturday 10 January 2026 14:50:17 +0000 (0:00:00.411) 0:00:01.235 ****** 2026-01-10 14:53:17.804107 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:53:17.804113 | orchestrator | 2026-01-10 14:53:17.804118 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2026-01-10 14:53:17.804123 | orchestrator | Saturday 10 January 2026 14:50:18 +0000 (0:00:00.732) 0:00:01.968 ****** 2026-01-10 14:53:17.804128 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2026-01-10 14:53:17.804133 | orchestrator | 2026-01-10 14:53:17.804138 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2026-01-10 14:53:17.804143 | orchestrator | Saturday 10 January 2026 14:50:21 +0000 (0:00:03.022) 0:00:04.990 ****** 2026-01-10 14:53:17.804148 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2026-01-10 14:53:17.804182 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2026-01-10 14:53:17.804221 | orchestrator | 2026-01-10 14:53:17.804228 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2026-01-10 14:53:17.804233 | orchestrator | Saturday 10 January 2026 14:50:28 +0000 (0:00:06.346) 0:00:11.337 ****** 2026-01-10 14:53:17.804248 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-01-10 14:53:17.804254 | orchestrator | 2026-01-10 14:53:17.804259 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2026-01-10 14:53:17.804264 | orchestrator | Saturday 10 January 2026 14:50:31 +0000 (0:00:02.997) 0:00:14.335 ****** 2026-01-10 14:53:17.804270 | orchestrator | 
[WARNING]: Module did not set no_log for update_password 2026-01-10 14:53:17.804275 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2026-01-10 14:53:17.804281 | orchestrator | 2026-01-10 14:53:17.804286 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2026-01-10 14:53:17.804292 | orchestrator | Saturday 10 January 2026 14:50:35 +0000 (0:00:04.264) 0:00:18.600 ****** 2026-01-10 14:53:17.804298 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-01-10 14:53:17.804301 | orchestrator | 2026-01-10 14:53:17.804305 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2026-01-10 14:53:17.804310 | orchestrator | Saturday 10 January 2026 14:50:39 +0000 (0:00:03.875) 0:00:22.475 ****** 2026-01-10 14:53:17.804316 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2026-01-10 14:53:17.804321 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2026-01-10 14:53:17.804326 | orchestrator | 2026-01-10 14:53:17.804331 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2026-01-10 14:53:17.804835 | orchestrator | Saturday 10 January 2026 14:50:46 +0000 (0:00:07.791) 0:00:30.267 ****** 2026-01-10 14:53:17.804858 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-10 14:53:17.804895 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-10 14:53:17.804901 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-10 14:53:17.804911 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-10 14:53:17.804915 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-10 14:53:17.804918 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-10 14:53:17.804926 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-10 14:53:17.804939 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-10 14:53:17.804943 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-10 14:53:17.804948 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-10 14:53:17.804951 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-10 14:53:17.804955 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 
'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-10 14:53:17.804961 | orchestrator | 2026-01-10 14:53:17.804965 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-01-10 14:53:17.804968 | orchestrator | Saturday 10 January 2026 14:50:49 +0000 (0:00:02.720) 0:00:32.987 ****** 2026-01-10 14:53:17.804971 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:53:17.804974 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:53:17.804977 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:53:17.804980 | orchestrator | 2026-01-10 14:53:17.804983 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-01-10 14:53:17.804986 | orchestrator | Saturday 10 January 2026 14:50:50 +0000 (0:00:00.540) 0:00:33.528 ****** 2026-01-10 14:53:17.804990 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:53:17.804993 | orchestrator | 2026-01-10 14:53:17.805004 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2026-01-10 14:53:17.805007 | orchestrator | Saturday 10 January 2026 14:50:51 +0000 (0:00:00.836) 0:00:34.365 ****** 2026-01-10 14:53:17.805010 | orchestrator | changed: [testbed-node-0] => (item=cinder-volume) 2026-01-10 14:53:17.805013 | orchestrator | changed: [testbed-node-1] => (item=cinder-volume) 
2026-01-10 14:53:17.805016 | orchestrator | changed: [testbed-node-2] => (item=cinder-volume) 2026-01-10 14:53:17.805020 | orchestrator | changed: [testbed-node-0] => (item=cinder-backup) 2026-01-10 14:53:17.805023 | orchestrator | changed: [testbed-node-1] => (item=cinder-backup) 2026-01-10 14:53:17.805026 | orchestrator | changed: [testbed-node-2] => (item=cinder-backup) 2026-01-10 14:53:17.805029 | orchestrator | 2026-01-10 14:53:17.805032 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2026-01-10 14:53:17.805035 | orchestrator | Saturday 10 January 2026 14:50:53 +0000 (0:00:02.156) 0:00:36.521 ****** 2026-01-10 14:53:17.805038 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-01-10 14:53:17.805044 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-01-10 14:53:17.805050 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-01-10 14:53:17.805053 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-01-10 14:53:17.805064 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-01-10 14:53:17.805068 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-01-10 14:53:17.805073 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-01-10 14:53:17.805078 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-01-10 14:53:17.805082 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-01-10 14:53:17.805093 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-01-10 14:53:17.805097 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-01-10 14:53:17.805103 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-backup', 'value': 
{'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-01-10 14:53:17.805109 | orchestrator | 2026-01-10 14:53:17.805112 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2026-01-10 14:53:17.805115 | orchestrator | Saturday 10 January 2026 14:50:56 +0000 (0:00:03.661) 0:00:40.183 ****** 2026-01-10 14:53:17.805118 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-01-10 14:53:17.805122 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-01-10 14:53:17.805125 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-01-10 14:53:17.805128 | orchestrator | 2026-01-10 14:53:17.805131 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2026-01-10 14:53:17.805134 | orchestrator | Saturday 10 January 2026 14:50:58 +0000 (0:00:01.927) 0:00:42.111 ****** 2026-01-10 14:53:17.805165 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder.keyring) 2026-01-10 14:53:17.805172 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder.keyring) 2026-01-10 14:53:17.805178 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder.keyring) 2026-01-10 14:53:17.805183 
| orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder-backup.keyring) 2026-01-10 14:53:17.805188 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder-backup.keyring) 2026-01-10 14:53:17.805194 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder-backup.keyring) 2026-01-10 14:53:17.805198 | orchestrator | 2026-01-10 14:53:17.805203 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2026-01-10 14:53:17.805208 | orchestrator | Saturday 10 January 2026 14:51:02 +0000 (0:00:03.444) 0:00:45.555 ****** 2026-01-10 14:53:17.805212 | orchestrator | ok: [testbed-node-0] => (item=cinder-volume) 2026-01-10 14:53:17.805217 | orchestrator | ok: [testbed-node-1] => (item=cinder-volume) 2026-01-10 14:53:17.805221 | orchestrator | ok: [testbed-node-0] => (item=cinder-backup) 2026-01-10 14:53:17.805226 | orchestrator | ok: [testbed-node-2] => (item=cinder-volume) 2026-01-10 14:53:17.805230 | orchestrator | ok: [testbed-node-1] => (item=cinder-backup) 2026-01-10 14:53:17.805235 | orchestrator | ok: [testbed-node-2] => (item=cinder-backup) 2026-01-10 14:53:17.805239 | orchestrator | 2026-01-10 14:53:17.805244 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2026-01-10 14:53:17.805249 | orchestrator | Saturday 10 January 2026 14:51:03 +0000 (0:00:01.516) 0:00:47.072 ****** 2026-01-10 14:53:17.805254 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:53:17.805258 | orchestrator | 2026-01-10 14:53:17.805263 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2026-01-10 14:53:17.805268 | orchestrator | Saturday 10 January 2026 14:51:04 +0000 (0:00:00.253) 0:00:47.326 ****** 2026-01-10 14:53:17.805272 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:53:17.805277 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:53:17.805298 | orchestrator | skipping: [testbed-node-2] 
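The tasks above place per-backend ceph.conf files and the ceph.client.cinder / ceph.client.cinder-backup keyrings into the kolla config directories for cinder-volume and cinder-backup, matching the external Ceph backend `rbd-1` seen in the loop items. For orientation, this is a minimal sketch of the cinder.conf RBD stanza kolla-ansible typically renders for such a backend — the user and pool names here are illustrative assumptions, not values taken from this run:

```ini
# Illustrative sketch only: a typical external-Ceph backend stanza for a
# backend named "rbd-1". rbd_user/rbd_pool are assumed defaults, not
# values confirmed by this log.
[DEFAULT]
enabled_backends = rbd-1

[rbd-1]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = rbd-1
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_pool = volumes
```

The keyring filenames copied above (ceph.client.cinder.keyring, ceph.client.cinder-backup.keyring) correspond to the cephx users the volume and backup services authenticate with.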
2026-01-10 14:53:17.805304 | orchestrator | 2026-01-10 14:53:17.805309 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-01-10 14:53:17.805314 | orchestrator | Saturday 10 January 2026 14:51:04 +0000 (0:00:00.474) 0:00:47.800 ****** 2026-01-10 14:53:17.805319 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:53:17.805324 | orchestrator | 2026-01-10 14:53:17.805329 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2026-01-10 14:53:17.805334 | orchestrator | Saturday 10 January 2026 14:51:05 +0000 (0:00:00.906) 0:00:48.707 ****** 2026-01-10 14:53:17.805339 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-10 14:53:17.805353 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-10 14:53:17.805358 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-10 14:53:17.805364 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-10 14:53:17.805385 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-10 14:53:17.805392 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-10 14:53:17.805400 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-10 14:53:17.805409 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-10 14:53:17.805414 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-10 14:53:17.805419 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-10 14:53:17.805427 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-10 14:53:17.805433 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': 
'30'}}}) 2026-01-10 14:53:17.805441 | orchestrator | 2026-01-10 14:53:17.805446 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2026-01-10 14:53:17.805451 | orchestrator | Saturday 10 January 2026 14:51:10 +0000 (0:00:04.912) 0:00:53.619 ****** 2026-01-10 14:53:17.805460 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-01-10 14:53:17.805465 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-10 14:53:17.805470 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-10 14:53:17.805480 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-10 14:53:17.805486 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:53:17.805490 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-01-10 14:53:17.805496 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-10 14:53:17.805502 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-10 14:53:17.805507 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 
'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-10 14:53:17.805512 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:53:17.805520 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-01-10 14:53:17.805530 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-10 14:53:17.805539 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-10 14:53:17.805547 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-10 14:53:17.805552 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:53:17.805557 | orchestrator | 2026-01-10 14:53:17.805562 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 
2026-01-10 14:53:17.805565 | orchestrator | Saturday 10 January 2026 14:51:12 +0000 (0:00:02.042) 0:00:55.662 ****** 2026-01-10 14:53:17.805569 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-01-10 14:53:17.805572 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-10 14:53:17.805578 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 
'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-10 14:53:17.805584 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-10 14:53:17.805587 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:53:17.805593 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 
'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-01-10 14:53:17.805596 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-10 14:53:17.805599 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-10 14:53:17.805602 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-10 14:53:17.805609 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:53:17.805613 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-01-10 14:53:17.805616 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-10 14:53:17.805622 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-10 14:53:17.805626 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-10 14:53:17.805629 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:53:17.805632 | orchestrator | 2026-01-10 14:53:17.805635 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2026-01-10 14:53:17.805638 | orchestrator | Saturday 10 January 2026 14:51:14 +0000 (0:00:01.814) 0:00:57.476 ****** 2026-01-10 14:53:17.805641 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-10 14:53:17.805649 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-10 14:53:17.805669 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-10 14:53:17.805675 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-10 14:53:17.805680 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-10 14:53:17.805686 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-10 14:53:17.805700 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-10 14:53:17.805709 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-10 14:53:17.805714 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-10 14:53:17.805721 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-10 14:53:17.805726 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-10 14:53:17.805734 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-10 14:53:17.805739 | orchestrator | 2026-01-10 14:53:17.805744 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2026-01-10 14:53:17.805749 | orchestrator | Saturday 10 January 2026 14:51:18 +0000 (0:00:04.138) 0:01:01.615 ****** 2026-01-10 14:53:17.805755 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-01-10 14:53:17.805764 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-01-10 14:53:17.805769 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-01-10 14:53:17.805774 | orchestrator | 
2026-01-10 14:53:17.805781 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2026-01-10 14:53:17.805784 | orchestrator | Saturday 10 January 2026 14:51:19 +0000 (0:00:01.321) 0:01:02.937 ****** 2026-01-10 14:53:17.805787 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-10 14:53:17.805793 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-10 14:53:17.805796 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-10 14:53:17.805802 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-10 14:53:17.805809 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-10 14:53:17.805813 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-10 14:53:17.805816 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-10 14:53:17.805820 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-10 14:53:17.805824 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-10 14:53:17.805830 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-10 14:53:17.805836 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-10 14:53:17.805854 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-10 14:53:17.805858 | orchestrator | 2026-01-10 14:53:17.805862 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2026-01-10 14:53:17.805865 | orchestrator | Saturday 10 January 2026 14:51:33 +0000 (0:00:13.546) 0:01:16.483 ****** 2026-01-10 14:53:17.805869 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:53:17.805872 | orchestrator | changed: [testbed-node-1] 2026-01-10 
14:53:17.805876 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:53:17.805879 | orchestrator | 2026-01-10 14:53:17.805883 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2026-01-10 14:53:17.805886 | orchestrator | Saturday 10 January 2026 14:51:36 +0000 (0:00:03.021) 0:01:19.505 ****** 2026-01-10 14:53:17.805891 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-01-10 14:53:17.805897 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-10 14:53:17.805901 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-10 14:53:17.805907 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-10 14:53:17.805910 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:53:17.805914 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-01-10 14:53:17.805918 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-10 14:53:17.805923 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 
'tls_backend': 'no'}}}})  2026-01-10 14:53:17.805929 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-10 14:53:17.805934 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-10 14:53:17.805938 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-10 14:53:17.805942 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-10 14:53:17.805947 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-10 14:53:17.805954 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:53:17.805958 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:53:17.805961 | orchestrator | 2026-01-10 14:53:17.805965 | orchestrator | TASK [cinder : Copying over nfs_shares files for 
cinder_volume] **************** 2026-01-10 14:53:17.805968 | orchestrator | Saturday 10 January 2026 14:51:36 +0000 (0:00:00.802) 0:01:20.307 ****** 2026-01-10 14:53:17.805972 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:53:17.805975 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:53:17.805978 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:53:17.805982 | orchestrator | 2026-01-10 14:53:17.805985 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2026-01-10 14:53:17.805989 | orchestrator | Saturday 10 January 2026 14:51:37 +0000 (0:00:00.386) 0:01:20.694 ****** 2026-01-10 14:53:17.805992 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-10 14:53:17.805998 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-10 14:53:17.806004 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-10 14:53:17.806047 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-10 14:53:17.806059 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-10 14:53:17.806064 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-10 14:53:17.806069 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-10 14:53:17.806078 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-10 14:53:17.806084 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-10 14:53:17.806092 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': 
True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-10 14:53:17.806101 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-10 14:53:17.806106 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-10 14:53:17.806111 | orchestrator | 2026-01-10 14:53:17.806116 | 
orchestrator | TASK [cinder : include_tasks] **************************************************
2026-01-10 14:53:17.806121 | orchestrator | Saturday 10 January 2026 14:51:40 +0000 (0:00:02.948) 0:01:23.642 ******
2026-01-10 14:53:17.806126 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:53:17.806131 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:53:17.806136 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:53:17.806141 | orchestrator |
2026-01-10 14:53:17.806146 | orchestrator | TASK [cinder : Creating Cinder database] ***************************************
2026-01-10 14:53:17.806151 | orchestrator | Saturday 10 January 2026 14:51:40 +0000 (0:00:00.410) 0:01:24.053 ******
2026-01-10 14:53:17.806155 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:53:17.806160 | orchestrator |
2026-01-10 14:53:17.806165 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] **********
2026-01-10 14:53:17.806170 | orchestrator | Saturday 10 January 2026 14:51:42 +0000 (0:00:01.926) 0:01:25.980 ******
2026-01-10 14:53:17.806174 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:53:17.806179 | orchestrator |
2026-01-10 14:53:17.806184 | orchestrator | TASK [cinder : Running Cinder bootstrap container] *****************************
2026-01-10 14:53:17.806193 | orchestrator | Saturday 10 January 2026 14:51:45 +0000 (0:00:02.489) 0:01:28.470 ******
2026-01-10 14:53:17.806198 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:53:17.806204 | orchestrator |
2026-01-10 14:53:17.806209 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2026-01-10 14:53:17.806214 | orchestrator | Saturday 10 January 2026 14:52:06 +0000 (0:00:20.940) 0:01:49.410 ******
2026-01-10 14:53:17.806219 | orchestrator |
2026-01-10 14:53:17.806224 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2026-01-10 14:53:17.806230 | orchestrator | Saturday 10 January 2026 14:52:06 +0000 (0:00:00.090) 0:01:49.501 ******
2026-01-10 14:53:17.806235 | orchestrator |
2026-01-10 14:53:17.806240 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2026-01-10 14:53:17.806250 | orchestrator | Saturday 10 January 2026 14:52:06 +0000 (0:00:00.088) 0:01:49.589 ******
2026-01-10 14:53:17.806255 | orchestrator |
2026-01-10 14:53:17.806260 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************
2026-01-10 14:53:17.806265 | orchestrator | Saturday 10 January 2026 14:52:06 +0000 (0:00:00.083) 0:01:49.673 ******
2026-01-10 14:53:17.806270 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:53:17.806274 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:53:17.806279 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:53:17.806284 | orchestrator |
2026-01-10 14:53:17.806288 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ******************
2026-01-10 14:53:17.806293 | orchestrator | Saturday 10 January 2026 14:52:31 +0000 (0:00:25.256) 0:02:14.929 ******
2026-01-10 14:53:17.806297 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:53:17.806302 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:53:17.806307 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:53:17.806311 | orchestrator |
2026-01-10 14:53:17.806316 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] *********************
2026-01-10 14:53:17.806321 | orchestrator | Saturday 10 January 2026 14:52:43 +0000 (0:00:11.568) 0:02:26.498 ******
2026-01-10 14:53:17.806326 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:53:17.806332 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:53:17.806336 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:53:17.806342 | orchestrator |
2026-01-10 14:53:17.806347 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] *********************
2026-01-10 14:53:17.806352 | orchestrator | Saturday 10 January 2026 14:53:08 +0000 (0:00:25.458) 0:02:51.957 ******
2026-01-10 14:53:17.806357 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:53:17.806362 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:53:17.806368 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:53:17.806373 | orchestrator |
2026-01-10 14:53:17.806378 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] ***
2026-01-10 14:53:17.806384 | orchestrator | Saturday 10 January 2026 14:53:15 +0000 (0:00:06.678) 0:02:58.635 ******
2026-01-10 14:53:17.806392 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:53:17.806397 | orchestrator |
2026-01-10 14:53:17.806402 | orchestrator | PLAY RECAP *********************************************************************
2026-01-10 14:53:17.806407 | orchestrator | testbed-node-0 : ok=30  changed=22  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-01-10 14:53:17.806413 | orchestrator | testbed-node-1 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-10 14:53:17.806419 | orchestrator | testbed-node-2 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-10 14:53:17.806424 | orchestrator |
2026-01-10 14:53:17.806429 | orchestrator |
2026-01-10 14:53:17.806434 | orchestrator | TASKS RECAP ********************************************************************
2026-01-10 14:53:17.806440 | orchestrator | Saturday 10 January 2026 14:53:15 +0000 (0:00:00.264) 0:02:58.900 ******
2026-01-10 14:53:17.806445 | orchestrator | ===============================================================================
2026-01-10 14:53:17.806450 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 25.46s
2026-01-10 14:53:17.806455 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 25.26s
2026-01-10 14:53:17.806460 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 20.94s
2026-01-10 14:53:17.806466 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 13.55s
2026-01-10 14:53:17.806471 | orchestrator | cinder : Restart cinder-scheduler container ---------------------------- 11.57s
2026-01-10 14:53:17.806476 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 7.79s
2026-01-10 14:53:17.806481 | orchestrator | cinder : Restart cinder-backup container -------------------------------- 6.68s
2026-01-10 14:53:17.806490 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 6.35s
2026-01-10 14:53:17.806495 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 4.91s
2026-01-10 14:53:17.806500 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 4.26s
2026-01-10 14:53:17.806505 | orchestrator | cinder : Copying over config.json files for services -------------------- 4.14s
2026-01-10 14:53:17.806510 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.88s
2026-01-10 14:53:17.806515 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 3.66s
2026-01-10 14:53:17.806521 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 3.44s
2026-01-10 14:53:17.806526 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.02s
2026-01-10 14:53:17.806531 | orchestrator | cinder : Generating 'hostnqn' file for cinder_volume -------------------- 3.02s
2026-01-10 14:53:17.806536 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.00s
2026-01-10 14:53:17.806545 | orchestrator | cinder : Check cinder containers ---------------------------------------- 2.95s
2026-01-10 14:53:17.806551
| orchestrator | cinder : Ensuring config directories exist ------------------------------ 2.72s
2026-01-10 14:53:17.806555 | orchestrator | cinder : Creating Cinder database user and setting permissions ---------- 2.49s
2026-01-10 14:53:17.806560 | orchestrator | 2026-01-10 14:53:17 | INFO  | Task 9295f051-c83d-4adb-aff1-f21d80b87b9c is in state STARTED
2026-01-10 14:53:17.806816 | orchestrator | 2026-01-10 14:53:17 | INFO  | Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED
2026-01-10 14:53:17.808499 | orchestrator | 2026-01-10 14:53:17 | INFO  | Task 5623d6c4-2e01-4726-8f56-a3739e32c524 is in state STARTED
2026-01-10 14:53:17.808547 | orchestrator | 2026-01-10 14:53:17 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:54:43.147662 | orchestrator | 2026-01-10 14:54:43 | INFO  | Task 9295f051-c83d-4adb-aff1-f21d80b87b9c is in state SUCCESS
2026-01-10 14:54:43.149256 | orchestrator | 2026-01-10 14:54:43 | INFO  | Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED
2026-01-10 14:54:43.150862 | orchestrator | 2026-01-10 14:54:43 | INFO  | Task 5623d6c4-2e01-4726-8f56-a3739e32c524 is in state STARTED
2026-01-10 14:54:43.150927 | orchestrator | 2026-01-10 14:54:43 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:54:46.190611 | orchestrator | 2026-01-10 14:54:46 | INFO  | Task 8e3a721f-515c-40ab-a6e8-a989eaed8f4c is in state STARTED
2026-01-10 14:54:46.192256 | orchestrator | 2026-01-10 14:54:46 | INFO  | Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED
2026-01-10 14:54:46.193162 | orchestrator | 2026-01-10 14:54:46 | INFO  | Task 5623d6c4-2e01-4726-8f56-a3739e32c524 is in state STARTED
2026-01-10 14:54:46.193210 | orchestrator | 2026-01-10 14:54:46 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:54:49.234857 | orchestrator | 2026-01-10 14:54:49 | INFO  | Task 8e3a721f-515c-40ab-a6e8-a989eaed8f4c is in state
STARTED 2026-01-10 14:54:49.236828 | orchestrator | 2026-01-10 14:54:49 | INFO  | Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED 2026-01-10 14:54:49.238653 | orchestrator | 2026-01-10 14:54:49 | INFO  | Task 5623d6c4-2e01-4726-8f56-a3739e32c524 is in state STARTED 2026-01-10 14:54:49.238697 | orchestrator | 2026-01-10 14:54:49 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:54:52.284018 | orchestrator | 2026-01-10 14:54:52 | INFO  | Task 8e3a721f-515c-40ab-a6e8-a989eaed8f4c is in state STARTED 2026-01-10 14:54:52.285930 | orchestrator | 2026-01-10 14:54:52 | INFO  | Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED 2026-01-10 14:54:52.288362 | orchestrator | 2026-01-10 14:54:52 | INFO  | Task 5623d6c4-2e01-4726-8f56-a3739e32c524 is in state STARTED 2026-01-10 14:54:52.288416 | orchestrator | 2026-01-10 14:54:52 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:54:55.328581 | orchestrator | 2026-01-10 14:54:55 | INFO  | Task 8e3a721f-515c-40ab-a6e8-a989eaed8f4c is in state STARTED 2026-01-10 14:54:55.331704 | orchestrator | 2026-01-10 14:54:55 | INFO  | Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED 2026-01-10 14:54:55.332994 | orchestrator | 2026-01-10 14:54:55 | INFO  | Task 5623d6c4-2e01-4726-8f56-a3739e32c524 is in state STARTED 2026-01-10 14:54:55.333025 | orchestrator | 2026-01-10 14:54:55 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:54:58.364968 | orchestrator | 2026-01-10 14:54:58 | INFO  | Task 8e3a721f-515c-40ab-a6e8-a989eaed8f4c is in state STARTED 2026-01-10 14:54:58.365568 | orchestrator | 2026-01-10 14:54:58 | INFO  | Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED 2026-01-10 14:54:58.366267 | orchestrator | 2026-01-10 14:54:58 | INFO  | Task 5623d6c4-2e01-4726-8f56-a3739e32c524 is in state STARTED 2026-01-10 14:54:58.366292 | orchestrator | 2026-01-10 14:54:58 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:55:01.395426 | orchestrator | 
2026-01-10 14:55:01 | INFO  | Task 8e3a721f-515c-40ab-a6e8-a989eaed8f4c is in state STARTED 2026-01-10 14:55:01.396392 | orchestrator | 2026-01-10 14:55:01 | INFO  | Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED 2026-01-10 14:55:01.399193 | orchestrator | 2026-01-10 14:55:01 | INFO  | Task 5623d6c4-2e01-4726-8f56-a3739e32c524 is in state STARTED 2026-01-10 14:55:01.399234 | orchestrator | 2026-01-10 14:55:01 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:55:04.434444 | orchestrator | 2026-01-10 14:55:04 | INFO  | Task 8e3a721f-515c-40ab-a6e8-a989eaed8f4c is in state STARTED 2026-01-10 14:55:04.434592 | orchestrator | 2026-01-10 14:55:04 | INFO  | Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED 2026-01-10 14:55:04.436031 | orchestrator | 2026-01-10 14:55:04 | INFO  | Task 5623d6c4-2e01-4726-8f56-a3739e32c524 is in state STARTED 2026-01-10 14:55:04.436082 | orchestrator | 2026-01-10 14:55:04 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:55:07.473998 | orchestrator | 2026-01-10 14:55:07 | INFO  | Task 8e3a721f-515c-40ab-a6e8-a989eaed8f4c is in state STARTED 2026-01-10 14:55:07.476330 | orchestrator | 2026-01-10 14:55:07 | INFO  | Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED 2026-01-10 14:55:07.477245 | orchestrator | 2026-01-10 14:55:07 | INFO  | Task 5623d6c4-2e01-4726-8f56-a3739e32c524 is in state STARTED 2026-01-10 14:55:07.477290 | orchestrator | 2026-01-10 14:55:07 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:55:10.531694 | orchestrator | 2026-01-10 14:55:10 | INFO  | Task 8e3a721f-515c-40ab-a6e8-a989eaed8f4c is in state STARTED 2026-01-10 14:55:10.532954 | orchestrator | 2026-01-10 14:55:10 | INFO  | Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED 2026-01-10 14:55:10.535906 | orchestrator | 2026-01-10 14:55:10 | INFO  | Task 5623d6c4-2e01-4726-8f56-a3739e32c524 is in state STARTED 2026-01-10 14:55:10.535998 | orchestrator | 2026-01-10 14:55:10 | INFO  | 
Wait 1 second(s) until the next check
2026-01-10 14:55:13.579984 | orchestrator | 2026-01-10 14:55:13 | INFO  | Task 8e3a721f-515c-40ab-a6e8-a989eaed8f4c is in state STARTED
2026-01-10 14:55:13.582126 | orchestrator | 2026-01-10 14:55:13 | INFO  | Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED
2026-01-10 14:55:13.584440 | orchestrator | 2026-01-10 14:55:13 | INFO  | Task 5623d6c4-2e01-4726-8f56-a3739e32c524 is in state STARTED
2026-01-10 14:55:13.584601 | orchestrator | 2026-01-10 14:55:13 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:55:16.629677 | orchestrator | 2026-01-10 14:55:16 | INFO  | Task 8e3a721f-515c-40ab-a6e8-a989eaed8f4c is in state STARTED
2026-01-10 14:55:16.631759 | orchestrator | 2026-01-10 14:55:16 | INFO  | Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED
2026-01-10 14:55:16.634585 | orchestrator | 2026-01-10 14:55:16 | INFO  | Task 5623d6c4-2e01-4726-8f56-a3739e32c524 is in state SUCCESS
2026-01-10 14:55:16.636424 | orchestrator |
2026-01-10 14:55:16.636515 | orchestrator |
2026-01-10 14:55:16.636527 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-10 14:55:16.636535 | orchestrator |
2026-01-10 14:55:16.636541 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-10 14:55:16.636548 | orchestrator | Saturday 10 January 2026 14:52:35 +0000 (0:00:00.298) 0:00:00.298 ******
2026-01-10 14:55:16.636571 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:55:16.636579 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:55:16.636585 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:55:16.636592 | orchestrator |
2026-01-10 14:55:16.636598 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-10 14:55:16.636604 | orchestrator | Saturday 10 January 2026 14:52:36 +0000 (0:00:00.456) 0:00:00.755 ******
2026-01-10 14:55:16.636611 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True)
2026-01-10 14:55:16.636618 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True)
2026-01-10 14:55:16.636625 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True)
2026-01-10 14:55:16.636631 | orchestrator |
2026-01-10 14:55:16.636638 | orchestrator | PLAY [Wait for the Nova service] ***********************************************
2026-01-10 14:55:16.636642 | orchestrator |
2026-01-10 14:55:16.636646 | orchestrator | TASK [Waiting for Nova public port to be UP] ***********************************
2026-01-10 14:55:16.636650 | orchestrator | Saturday 10 January 2026 14:52:37 +0000 (0:00:00.710) 0:00:01.466 ******
2026-01-10 14:55:16.636654 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:55:16.636661 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:55:16.636668 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:55:16.636678 | orchestrator |
2026-01-10 14:55:16.636684 | orchestrator | PLAY RECAP *********************************************************************
2026-01-10 14:55:16.636691 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:55:16.636699 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:55:16.636705 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:55:16.636710 | orchestrator |
2026-01-10 14:55:16.636716 | orchestrator |
2026-01-10 14:55:16.636722 | orchestrator | TASKS RECAP ********************************************************************
2026-01-10 14:55:16.636728 | orchestrator | Saturday 10 January 2026 14:54:41 +0000 (0:02:04.864) 0:02:06.330 ******
2026-01-10 14:55:16.636734 | orchestrator | ===============================================================================
2026-01-10 14:55:16.636740 | orchestrator | Waiting for Nova public port to be UP --------------------------------- 124.86s
2026-01-10 14:55:16.636746 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.71s
2026-01-10 14:55:16.636753 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.46s
2026-01-10 14:55:16.636759 | orchestrator |
2026-01-10 14:55:16.636765 | orchestrator |
2026-01-10 14:55:16.636772 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-10 14:55:16.636778 | orchestrator |
2026-01-10 14:55:16.636807 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-10 14:55:16.636813 | orchestrator | Saturday 10 January 2026 14:53:04 +0000 (0:00:00.273) 0:00:00.273 ******
2026-01-10 14:55:16.636819 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:55:16.636825 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:55:16.636831 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:55:16.636838 | orchestrator |
2026-01-10 14:55:16.636845 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-10 14:55:16.636852 | orchestrator | Saturday 10 January 2026 14:53:04 +0000 (0:00:00.294) 0:00:00.568 ******
2026-01-10 14:55:16.636858 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True)
2026-01-10 14:55:16.636866 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True)
2026-01-10 14:55:16.636870 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True)
2026-01-10 14:55:16.636874 | orchestrator |
2026-01-10 14:55:16.636877 | orchestrator | PLAY [Apply role grafana] ******************************************************
2026-01-10 14:55:16.636881 | orchestrator |
2026-01-10 14:55:16.636885 | orchestrator | TASK [grafana : include_tasks] *************************************************
2026-01-10 14:55:16.636895 | orchestrator | Saturday 10 January 2026 14:53:04 +0000 (0:00:00.448)
0:00:01.016 ****** 2026-01-10 14:55:16.636910 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:55:16.636916 | orchestrator | 2026-01-10 14:55:16.636923 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2026-01-10 14:55:16.636929 | orchestrator | Saturday 10 January 2026 14:53:05 +0000 (0:00:00.600) 0:00:01.617 ****** 2026-01-10 14:55:16.636939 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-10 14:55:16.637047 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-10 14:55:16.637058 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': 
{'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-10 14:55:16.637066 | orchestrator | 2026-01-10 14:55:16.637072 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2026-01-10 14:55:16.637078 | orchestrator | Saturday 10 January 2026 14:53:06 +0000 (0:00:00.705) 0:00:02.322 ****** 2026-01-10 14:55:16.637085 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2026-01-10 14:55:16.637140 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2026-01-10 14:55:16.637144 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-10 14:55:16.637148 | orchestrator | 2026-01-10 14:55:16.637152 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-01-10 14:55:16.637155 | orchestrator | Saturday 10 January 2026 14:53:07 +0000 (0:00:00.827) 0:00:03.149 ****** 2026-01-10 14:55:16.637159 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:55:16.637163 | orchestrator | 2026-01-10 14:55:16.637167 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2026-01-10 14:55:16.637171 | orchestrator | Saturday 10 January 2026 14:53:07 +0000 (0:00:00.733) 0:00:03.883 ****** 2026-01-10 14:55:16.637175 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-10 14:55:16.637184 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-10 14:55:16.637191 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-10 14:55:16.637195 | orchestrator | 2026-01-10 14:55:16.637203 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2026-01-10 14:55:16.637208 | orchestrator | Saturday 10 January 2026 14:53:08 +0000 (0:00:01.122) 0:00:05.005 ****** 2026-01-10 14:55:16.637212 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-01-10 14:55:16.637216 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:55:16.637220 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-01-10 14:55:16.637224 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': 
{'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-01-10 14:55:16.637231 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:55:16.637236 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:55:16.637240 | orchestrator | 2026-01-10 14:55:16.637244 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2026-01-10 14:55:16.637248 | orchestrator | Saturday 10 January 2026 14:53:09 +0000 (0:00:00.350) 0:00:05.355 ****** 2026-01-10 14:55:16.637253 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-01-10 14:55:16.637257 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:55:16.637262 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-01-10 14:55:16.637266 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:55:16.637276 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-01-10 14:55:16.637281 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:55:16.637286 | orchestrator | 2026-01-10 14:55:16.637290 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2026-01-10 14:55:16.637294 | orchestrator | Saturday 10 January 2026 14:53:10 +0000 (0:00:01.022) 0:00:06.378 ****** 2026-01-10 14:55:16.637299 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': 
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-10 14:55:16.637303 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-10 14:55:16.637311 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-10 14:55:16.637316 | orchestrator | 2026-01-10 14:55:16.637323 | orchestrator | TASK 
[grafana : Copying over grafana.ini] ************************************** 2026-01-10 14:55:16.637329 | orchestrator | Saturday 10 January 2026 14:53:11 +0000 (0:00:01.331) 0:00:07.709 ****** 2026-01-10 14:55:16.637335 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-10 14:55:16.637347 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-10 14:55:16.637353 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-10 14:55:16.637359 | orchestrator | 2026-01-10 14:55:16.637365 | orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2026-01-10 14:55:16.637371 | orchestrator | Saturday 10 January 2026 14:53:13 +0000 (0:00:01.453) 0:00:09.163 ****** 2026-01-10 14:55:16.637377 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:55:16.637389 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:55:16.637399 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:55:16.637405 | orchestrator | 2026-01-10 14:55:16.637411 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] ************* 2026-01-10 14:55:16.637417 | orchestrator | Saturday 10 January 2026 14:53:13 +0000 (0:00:00.499) 0:00:09.662 ****** 2026-01-10 14:55:16.637422 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-01-10 14:55:16.637428 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-01-10 14:55:16.637434 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-01-10 14:55:16.637439 | orchestrator | 2026-01-10 14:55:16.637445 | orchestrator | TASK [grafana : Configuring dashboards provisioning] *************************** 2026-01-10 14:55:16.637451 | orchestrator | Saturday 10 January 2026 14:53:14 +0000 (0:00:01.252) 0:00:10.915 ****** 2026-01-10 14:55:16.637468 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 
2026-01-10 14:55:16.637475 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-01-10 14:55:16.637481 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-01-10 14:55:16.637487 | orchestrator | 2026-01-10 14:55:16.637493 | orchestrator | TASK [grafana : Find custom grafana dashboards] ******************************** 2026-01-10 14:55:16.637499 | orchestrator | Saturday 10 January 2026 14:53:16 +0000 (0:00:01.209) 0:00:12.124 ****** 2026-01-10 14:55:16.637505 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-10 14:55:16.637511 | orchestrator | 2026-01-10 14:55:16.637516 | orchestrator | TASK [grafana : Find templated grafana dashboards] ***************************** 2026-01-10 14:55:16.637522 | orchestrator | Saturday 10 January 2026 14:53:16 +0000 (0:00:00.736) 0:00:12.860 ****** 2026-01-10 14:55:16.637528 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access 2026-01-10 14:55:16.637533 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory 2026-01-10 14:55:16.637539 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:55:16.637545 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:55:16.637551 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:55:16.637558 | orchestrator | 2026-01-10 14:55:16.637563 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] **************************** 2026-01-10 14:55:16.637569 | orchestrator | Saturday 10 January 2026 14:53:17 +0000 (0:00:00.639) 0:00:13.499 ****** 2026-01-10 14:55:16.637574 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:55:16.637579 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:55:16.637585 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:55:16.637590 | orchestrator | 2026-01-10 14:55:16.637596 | orchestrator | TASK [grafana : Copying over 
custom dashboards] ******************************** 2026-01-10 14:55:16.637602 | orchestrator | Saturday 10 January 2026 14:53:17 +0000 (0:00:00.425) 0:00:13.925 ****** 2026-01-10 14:55:16.637609 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1314123, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.3334577, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:55:16.637627 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1314123, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.3334577, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:55:16.637639 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1314123, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 
1764530892.0, 'ctime': 1768053707.3334577, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:55:16.637646 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1314182, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.3534534, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:55:16.637654 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1314182, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.3534534, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:55:16.637660 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1314182, 'dev': 125, 'nlink': 1, 'atime': 
1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.3534534, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:55:16.637667 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1314138, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.3393834, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:55:16.637681 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1314138, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.3393834, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:55:16.637692 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1314138, 'dev': 125, 'nlink': 1, 
'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.3393834, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:55:16.637699 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1314185, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.3574579, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:55:16.637706 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1314185, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.3574579, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:55:16.637710 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 
167897, 'inode': 1314185, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.3574579, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:55:16.637714 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1314159, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.3444576, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:55:16.637722 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1314159, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.3444576, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:55:16.637731 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1314159, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.3444576, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:55:16.637735 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1314172, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.3504684, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:55:16.637739 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1314172, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.3504684, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:55:16.637743 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 
'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1314172, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.3504684, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:55:16.637747 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1314120, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.331168, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:55:16.637752 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1314120, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.331168, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:55:16.638125 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1314120, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.331168, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:55:16.638143 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1314131, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.3357344, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:55:16.638147 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1314131, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.3357344, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:55:16.638151 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1314131, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.3357344, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:55:16.638155 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1314142, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.3394578, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:55:16.638160 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1314142, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.3394578, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:55:16.638182 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1314142, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.3394578, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:55:16.638190 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1314166, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.3479576, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:55:16.638197 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1314166, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.3479576, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:55:16.638204 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1314166, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.3479576, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:55:16.638210 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1314181, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.3517733, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:55:16.638214 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1314181, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.3517733, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:55:16.638218 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 
'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1314181, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.3517733, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:55:16.638231 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1314135, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.3374577, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:55:16.638236 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1314135, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.3374577, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:55:16.638239 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1314135, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.3374577, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:55:16.638243 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1314171, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.3487236, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:55:16.638247 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1314171, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.3487236, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:55:16.638251 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1314171, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.3487236, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:55:16.638262 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1314162, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.3468194, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:55:16.638267 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1314162, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.3468194, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:55:16.638270 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1314162, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.3468194, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:55:16.638274 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1314156, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.3437488, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:55:16.638279 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1314156, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.3437488, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': 
False}})
2026-01-10 14:55:16.638283 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1314156, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.3437488, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-10 14:55:16.638294 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1314152, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.3433175, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-10 14:55:16.638298 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1314152, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.3433175, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-10 14:55:16.638302 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1314152, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.3433175, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-10 14:55:16.638306 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1314168, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.3487236, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-10 14:55:16.638310 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1314168, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.3487236, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-10 14:55:16.638314 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1314168, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.3487236, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-10 14:55:16.638320 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1314146, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.3422863, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-10 14:55:16.638330 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1314146, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.3422863, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-10 14:55:16.638334 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1314146, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.3422863, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-10 14:55:16.638338 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1314177, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.3509085, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-10 14:55:16.638342 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1314177, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.3509085, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-10 14:55:16.638346 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1314177, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.3509085, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-10 14:55:16.638353 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1314381, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.404582, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-10 14:55:16.638361 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1314381, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.404582, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-10 14:55:16.638365 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1314381, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.404582, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-10 14:55:16.638369 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1314228, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.372458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-10 14:55:16.638373 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1314228, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.372458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-10 14:55:16.638377 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1314228, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.372458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-10 14:55:16.638383 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1314210, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.360811, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-10 14:55:16.638389 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1314210, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.360811, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-10 14:55:16.638395 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1314210, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.360811, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-10 14:55:16.638400 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1314262, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.3786304, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-10 14:55:16.638404 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1314262, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.3786304, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-10 14:55:16.638408 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1314262, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.3786304, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-10 14:55:16.638415 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1314196, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.3586736, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-10 14:55:16.638420 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1314196, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.3586736, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-10 14:55:16.638429 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1314196, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.3586736, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-10 14:55:16.638433 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1314309, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.3952308, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-10 14:55:16.638437 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1314309, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.3952308, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-10 14:55:16.638441 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1314309, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.3952308, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-10 14:55:16.638489 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1314265, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.3892324, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-10 14:55:16.638495 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1314265, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.3892324, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-10 14:55:16.638505 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1314265, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.3892324, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-10 14:55:16.638509 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1314324, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.3957024, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-10 14:55:16.638514 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1314324, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.3957024, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-10 14:55:16.638518 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1314324, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.3957024, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-10 14:55:16.638526 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1314368, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.4031055, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-10 14:55:16.638530 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1314368, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.4031055, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-10 14:55:16.638539 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1314368, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.4031055, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-10 14:55:16.638543 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1314300, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.3928437, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-10 14:55:16.638547 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1314300, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.3928437, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-10 14:55:16.638551 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1314300, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.3928437, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-10 14:55:16.638558 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1314249, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.3766315, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-10 14:55:16.638562 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1314249, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.3766315, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-10 14:55:16.638570 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1314249, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.3766315, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-10 14:55:16.638574 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1314220, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.3666732, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-10 14:55:16.638578 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1314220, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.3666732, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-10 14:55:16.638582 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1314220, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.3666732, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-10 14:55:16.638591 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1314245, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.373458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-10 14:55:16.638595 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1314245, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.373458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-10 14:55:16.638601 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1314245, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.373458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-10 14:55:16.638608 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1314211, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.363458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-10 14:55:16.638612 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1314211, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.363458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-10 14:55:16.638616 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1314211, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.363458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-10 14:55:16.638623 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1314257, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.3786304, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-10 14:55:16.638627 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1314257, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.3786304, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-10 14:55:16.638631 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1314257, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.3786304, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False,
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-10 14:55:16.638640 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1314348, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.4017534, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-10 14:55:16.638644 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1314348, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.4017534, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-10 14:55:16.638648 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1314348, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.4017534, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-10 14:55:16.638655 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1314332, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.3994029, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-10 14:55:16.638659 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1314332, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.3994029, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-10 14:55:16.638663 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1314332, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.3994029, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-10 14:55:16.638671 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1314200, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.3591764, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-10 14:55:16.638675 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1314200, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.3591764, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-10 14:55:16.638682 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 
'gid': 0, 'size': 31128, 'inode': 1314200, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.3591764, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-10 14:55:16.638686 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1314205, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.360811, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-10 14:55:16.638690 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1314205, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.360811, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-10 14:55:16.638694 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1314205, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.360811, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-10 14:55:16.638702 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1314291, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.391458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-10 14:55:16.638710 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1314291, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.391458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-10 14:55:16.638720 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1314291, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.391458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-10 14:55:16.638726 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1314328, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.3957024, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-10 14:55:16.638733 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1314328, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.3957024, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-10 14:55:16.638740 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1314328, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1768053707.3957024, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-10 14:55:16.638746 | orchestrator |
2026-01-10 14:55:16.638753 | orchestrator | TASK [grafana : Check grafana containers] **************************************
2026-01-10 14:55:16.638760 | orchestrator | Saturday 10 January 2026 14:53:53 +0000 (0:00:35.357) 0:00:49.283 ******
2026-01-10 14:55:16.638797 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-01-10 14:55:16.638805 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-01-10 14:55:16.638815 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-01-10 14:55:16.638821 | orchestrator |
2026-01-10 14:55:16.638827 | orchestrator | TASK [grafana : Creating grafana database] *************************************
2026-01-10 14:55:16.638832 | orchestrator | Saturday 10 January 2026 14:53:54 +0000 (0:00:00.905) 0:00:50.189 ******
2026-01-10 14:55:16.638838 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:55:16.638844 | orchestrator |
2026-01-10 14:55:16.638850 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ********
2026-01-10 14:55:16.638856 | orchestrator | Saturday 10 January 2026 14:53:56 +0000 (0:00:02.187) 0:00:52.376 ******
2026-01-10 14:55:16.638861 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:55:16.638867 | orchestrator |
2026-01-10 14:55:16.638873 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2026-01-10 14:55:16.638880 | orchestrator | Saturday 10 January 2026 14:53:58 +0000 (0:00:02.032) 0:00:54.408 ******
2026-01-10 14:55:16.638885 | orchestrator |
2026-01-10 14:55:16.638891 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2026-01-10 14:55:16.638898 | orchestrator | Saturday 10 January 2026 14:53:58 +0000 (0:00:00.075) 0:00:54.484 ******
2026-01-10 14:55:16.638904 | orchestrator |
2026-01-10 14:55:16.638910 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2026-01-10 14:55:16.638917 | orchestrator | Saturday 10 January 2026 14:53:58 +0000 (0:00:00.068) 0:00:54.552 ******
2026-01-10 14:55:16.638922 | orchestrator |
2026-01-10 14:55:16.638928 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ********************
2026-01-10 14:55:16.638934 | orchestrator | Saturday 10 January 2026 14:53:58 +0000 (0:00:00.259) 0:00:54.812 ******
2026-01-10 14:55:16.638939 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:55:16.638945 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:55:16.638951 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:55:16.638958 | orchestrator |
2026-01-10 14:55:16.638964 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] *********
2026-01-10 14:55:16.638970 | orchestrator | Saturday 10 January 2026 14:54:05 +0000 (0:00:06.778) 0:01:01.591 ******
2026-01-10 14:55:16.638977 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:55:16.638983 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:55:16.638990 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left).
2026-01-10 14:55:16.638996 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left).
2026-01-10 14:55:16.639002 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left).
2026-01-10 14:55:16.639009 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:55:16.639016 | orchestrator |
2026-01-10 14:55:16.639023 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] ***************
2026-01-10 14:55:16.639030 | orchestrator | Saturday 10 January 2026 14:54:44 +0000 (0:00:38.739) 0:01:40.330 ******
2026-01-10 14:55:16.639034 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:55:16.639041 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:55:16.639044 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:55:16.639048 | orchestrator |
2026-01-10 14:55:16.639052 | orchestrator | TASK [grafana : Wait for grafana application ready] ****************************
2026-01-10 14:55:16.639056 | orchestrator | Saturday 10 January 2026 14:55:10 +0000 (0:00:25.872) 0:02:06.203 ******
2026-01-10 14:55:16.639059 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:55:16.639063 | orchestrator |
2026-01-10 14:55:16.639069 | orchestrator | TASK [grafana : Remove old grafana docker volume] ******************************
2026-01-10 14:55:16.639073 | orchestrator | Saturday 10 January 2026 14:55:12 +0000 (0:00:02.524) 0:02:08.727 ******
2026-01-10 14:55:16.639081 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:55:16.639085 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:55:16.639089 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:55:16.639092 | orchestrator |
2026-01-10 14:55:16.639096 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************
2026-01-10 14:55:16.639100 | orchestrator | Saturday 10 January 2026 14:55:13 +0000 (0:00:00.538) 0:02:09.265 ******
2026-01-10 14:55:16.639104 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 
'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})
2026-01-10 14:55:16.639109 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}})
2026-01-10 14:55:16.639114 | orchestrator |
2026-01-10 14:55:16.639118 | orchestrator | TASK [grafana : Disable Getting Started panel] *********************************
2026-01-10 14:55:16.639122 | orchestrator | Saturday 10 January 2026 14:55:15 +0000 (0:00:02.354) 0:02:11.620 ******
2026-01-10 14:55:16.639126 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:55:16.639129 | orchestrator |
2026-01-10 14:55:16.639133 | orchestrator | PLAY RECAP *********************************************************************
2026-01-10 14:55:16.639137 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-10 14:55:16.639142 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-10 14:55:16.639145 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-10 14:55:16.639149 | orchestrator |
2026-01-10 14:55:16.639153 | orchestrator |
2026-01-10 14:55:16.639157 | orchestrator | TASKS RECAP ********************************************************************
2026-01-10 14:55:16.639160 | orchestrator | Saturday 10 January 2026 14:55:15 +0000 (0:00:00.272) 0:02:11.892 ******
2026-01-10 14:55:16.639164 | orchestrator | ===============================================================================
2026-01-10 14:55:16.639168 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 38.74s
2026-01-10 14:55:16.639171 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 35.36s
2026-01-10 14:55:16.639175 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 25.87s
2026-01-10 14:55:16.639179 | orchestrator | grafana : Restart first grafana container ------------------------------- 6.78s
2026-01-10 14:55:16.639182 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.52s
2026-01-10 14:55:16.639186 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.35s
2026-01-10 14:55:16.639190 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.19s
2026-01-10 14:55:16.639194 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.03s
2026-01-10 14:55:16.639201 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.45s
2026-01-10 14:55:16.639204 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.33s
2026-01-10 14:55:16.639208 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.25s
2026-01-10 14:55:16.639212 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.21s
2026-01-10 14:55:16.639215 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.12s
2026-01-10 14:55:16.639219 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 1.02s
2026-01-10 14:55:16.639223 | orchestrator | grafana : Check grafana containers -------------------------------------- 0.91s
2026-01-10 14:55:16.639227 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.83s
2026-01-10 14:55:16.639230 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 0.74s
2026-01-10 14:55:16.639234 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.73s
2026-01-10 14:55:16.639237 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.71s
2026-01-10 14:55:16.639244 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.64s
2026-01-10 14:55:16.639250 | orchestrator | 2026-01-10 14:55:16 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:55:19.678432 | orchestrator | 2026-01-10 14:55:19 | INFO  | Task 8e3a721f-515c-40ab-a6e8-a989eaed8f4c is in state STARTED
2026-01-10 14:55:19.680559 | orchestrator | 2026-01-10 14:55:19 | INFO  | Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED
2026-01-10 14:55:19.680607 | orchestrator | 2026-01-10 14:55:19 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:55:22.719610 | orchestrator | 2026-01-10 14:55:22 | INFO  | Task 8e3a721f-515c-40ab-a6e8-a989eaed8f4c is in state STARTED
2026-01-10 14:55:22.724567 | orchestrator | 2026-01-10 14:55:22 | INFO  | Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED
2026-01-10 14:55:22.724615 | orchestrator | 2026-01-10 14:55:22 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:55:25.765009 | orchestrator | 2026-01-10 14:55:25 | INFO  | Task 8e3a721f-515c-40ab-a6e8-a989eaed8f4c is in state STARTED
2026-01-10 14:55:25.768004 | orchestrator | 2026-01-10 14:55:25 | INFO  | Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED
2026-01-10 14:55:25.768088 | orchestrator | 2026-01-10 14:55:25 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:55:28.805194 | orchestrator | 2026-01-10 14:55:28 | INFO  | Task 8e3a721f-515c-40ab-a6e8-a989eaed8f4c is in state STARTED
2026-01-10 14:55:28.805242 | orchestrator | 2026-01-10 14:55:28 | INFO  | Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED
2026-01-10 14:55:28.805248 | 
orchestrator | 2026-01-10 14:55:28 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:55:31.851955 | orchestrator | 2026-01-10 14:55:31 | INFO  | Task 8e3a721f-515c-40ab-a6e8-a989eaed8f4c is in state STARTED
2026-01-10 14:55:31.854221 | orchestrator | 2026-01-10 14:55:31 | INFO  | Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED
2026-01-10 14:55:31.854948 | orchestrator | 2026-01-10 14:55:31 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:55:34.895488 | orchestrator | 2026-01-10 14:55:34 | INFO  | Task 8e3a721f-515c-40ab-a6e8-a989eaed8f4c is in state STARTED
2026-01-10 14:55:34.895878 | orchestrator | 2026-01-10 14:55:34 | INFO  | Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED
2026-01-10 14:55:34.896030 | orchestrator | 2026-01-10 14:55:34 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:55:37.927936 | orchestrator | 2026-01-10 14:55:37 | INFO  | Task 8e3a721f-515c-40ab-a6e8-a989eaed8f4c is in state STARTED
2026-01-10 14:55:37.929932 | orchestrator | 2026-01-10 14:55:37 | INFO  | Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED
2026-01-10 14:55:37.930006 | orchestrator | 2026-01-10 14:55:37 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:55:40.975708 | orchestrator | 2026-01-10 14:55:40 | INFO  | Task 8e3a721f-515c-40ab-a6e8-a989eaed8f4c is in state STARTED
2026-01-10 14:55:40.975814 | orchestrator | 2026-01-10 14:55:40 | INFO  | Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED
2026-01-10 14:55:40.975827 | orchestrator | 2026-01-10 14:55:40 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:55:44.033969 | orchestrator | 2026-01-10 14:55:44 | INFO  | Task 8e3a721f-515c-40ab-a6e8-a989eaed8f4c is in state STARTED
2026-01-10 14:55:44.035154 | orchestrator | 2026-01-10 14:55:44 | INFO  | Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED
2026-01-10 14:55:44.035191 | orchestrator | 2026-01-10 14:55:44 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:55:47.070378 | orchestrator | 2026-01-10 14:55:47 | INFO  | Task 8e3a721f-515c-40ab-a6e8-a989eaed8f4c is in state STARTED
2026-01-10 14:55:47.070463 | orchestrator | 2026-01-10 14:55:47 | INFO  | Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED
2026-01-10 14:55:47.070469 | orchestrator | 2026-01-10 14:55:47 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:55:50.105085 | orchestrator | 2026-01-10 14:55:50 | INFO  | Task 8e3a721f-515c-40ab-a6e8-a989eaed8f4c is in state STARTED
2026-01-10 14:55:50.107160 | orchestrator | 2026-01-10 14:55:50 | INFO  | Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED
2026-01-10 14:55:50.107227 | orchestrator | 2026-01-10 14:55:50 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:55:53.156486 | orchestrator | 2026-01-10 14:55:53 | INFO  | Task 8e3a721f-515c-40ab-a6e8-a989eaed8f4c is in state STARTED
2026-01-10 14:55:53.157282 | orchestrator | 2026-01-10 14:55:53 | INFO  | Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED
2026-01-10 14:55:53.157518 | orchestrator | 2026-01-10 14:55:53 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:55:56.216951 | orchestrator | 2026-01-10 14:55:56 | INFO  | Task 8e3a721f-515c-40ab-a6e8-a989eaed8f4c is in state STARTED
2026-01-10 14:55:56.217017 | orchestrator | 2026-01-10 14:55:56 | INFO  | Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED
2026-01-10 14:55:56.217023 | orchestrator | 2026-01-10 14:55:56 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:55:59.263503 | orchestrator | 2026-01-10 14:55:59 | INFO  | Task 8e3a721f-515c-40ab-a6e8-a989eaed8f4c is in state STARTED
2026-01-10 14:55:59.263893 | orchestrator | 2026-01-10 14:55:59 | INFO  | Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED
2026-01-10 14:55:59.263911 | orchestrator | 2026-01-10 14:55:59 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:56:02.302762 | orchestrator | 2026-01-10 14:56:02 | INFO  | Task 8e3a721f-515c-40ab-a6e8-a989eaed8f4c is in state STARTED
2026-01-10 14:56:02.304643 | orchestrator | 2026-01-10 14:56:02 | INFO  | Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED
2026-01-10 14:56:02.304708 | orchestrator | 2026-01-10 14:56:02 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:56:05.344648 | orchestrator | 2026-01-10 14:56:05 | INFO  | Task 8e3a721f-515c-40ab-a6e8-a989eaed8f4c is in state STARTED
2026-01-10 14:56:05.345053 | orchestrator | 2026-01-10 14:56:05 | INFO  | Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED
2026-01-10 14:56:05.345080 | orchestrator | 2026-01-10 14:56:05 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:56:08.383852 | orchestrator | 2026-01-10 14:56:08 | INFO  | Task 8e3a721f-515c-40ab-a6e8-a989eaed8f4c is in state STARTED
2026-01-10 14:56:08.385953 | orchestrator | 2026-01-10 14:56:08 | INFO  | Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED
2026-01-10 14:56:08.386082 | orchestrator | 2026-01-10 14:56:08 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:56:11.427774 | orchestrator | 2026-01-10 14:56:11 | INFO  | Task 8e3a721f-515c-40ab-a6e8-a989eaed8f4c is in state STARTED
2026-01-10 14:56:11.428711 | orchestrator | 2026-01-10 14:56:11 | INFO  | Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED
2026-01-10 14:56:11.428921 | orchestrator | 2026-01-10 14:56:11 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:56:14.474346 | orchestrator | 2026-01-10 14:56:14 | INFO  | Task 8e3a721f-515c-40ab-a6e8-a989eaed8f4c is in state STARTED
2026-01-10 14:56:14.476720 | orchestrator | 2026-01-10 14:56:14 | INFO  | Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED
2026-01-10 14:56:14.476782 | orchestrator | 2026-01-10 14:56:14 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:56:17.530533 | orchestrator | 2026-01-10 14:56:17 | INFO  | Task 8e3a721f-515c-40ab-a6e8-a989eaed8f4c is in state STARTED
2026-01-10 14:56:17.532431 | orchestrator | 2026-01-10 14:56:17 | INFO  | Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED
2026-01-10 14:56:17.532480 | orchestrator | 2026-01-10 14:56:17 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:56:20.576342 | orchestrator | 2026-01-10 14:56:20 | INFO  | Task 8e3a721f-515c-40ab-a6e8-a989eaed8f4c is in state STARTED
2026-01-10 14:56:20.578409 | orchestrator | 2026-01-10 14:56:20 | INFO  | Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED
2026-01-10 14:56:20.578466 | orchestrator | 2026-01-10 14:56:20 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:56:23.621992 | orchestrator | 2026-01-10 14:56:23 | INFO  | Task 8e3a721f-515c-40ab-a6e8-a989eaed8f4c is in state STARTED
2026-01-10 14:56:23.623841 | orchestrator | 2026-01-10 14:56:23 | INFO  | Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED
2026-01-10 14:56:23.623905 | orchestrator | 2026-01-10 14:56:23 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:56:26.671685 | orchestrator | 2026-01-10 14:56:26 | INFO  | Task 8e3a721f-515c-40ab-a6e8-a989eaed8f4c is in state STARTED
2026-01-10 14:56:26.674927 | orchestrator | 2026-01-10 14:56:26 | INFO  | Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED
2026-01-10 14:56:26.675570 | orchestrator | 2026-01-10 14:56:26 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:56:29.728380 | orchestrator | 2026-01-10 14:56:29 | INFO  | Task 8e3a721f-515c-40ab-a6e8-a989eaed8f4c is in state STARTED
2026-01-10 14:56:29.729569 | orchestrator | 2026-01-10 14:56:29 | INFO  | Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED
2026-01-10 14:56:29.729645 | orchestrator | 2026-01-10 14:56:29 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:56:32.771670 | orchestrator | 2026-01-10 14:56:32 | INFO  | Task 8e3a721f-515c-40ab-a6e8-a989eaed8f4c is in state STARTED
2026-01-10 14:56:32.773023 | orchestrator | 2026-01-10 14:56:32 | INFO  | Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED
2026-01-10 14:56:32.773083 | orchestrator | 2026-01-10 14:56:32 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:56:35.817836 | orchestrator | 2026-01-10 14:56:35 | INFO  | Task 8e3a721f-515c-40ab-a6e8-a989eaed8f4c is in state STARTED
2026-01-10 14:56:35.820966 | orchestrator | 2026-01-10 14:56:35 | INFO  | Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED
2026-01-10 14:56:35.821083 | orchestrator | 2026-01-10 14:56:35 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:56:38.866744 | orchestrator | 2026-01-10 14:56:38 | INFO  | Task 8e3a721f-515c-40ab-a6e8-a989eaed8f4c is in state STARTED
2026-01-10 14:56:38.868185 | orchestrator | 2026-01-10 14:56:38 | INFO  | Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED
2026-01-10 14:56:38.868281 | orchestrator | 2026-01-10 14:56:38 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:56:41.913544 | orchestrator | 2026-01-10 14:56:41 | INFO  | Task 8e3a721f-515c-40ab-a6e8-a989eaed8f4c is in state STARTED
2026-01-10 14:56:41.913606 | orchestrator | 2026-01-10 14:56:41 | INFO  | Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED
2026-01-10 14:56:41.914140 | orchestrator | 2026-01-10 14:56:41 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:56:44.955200 | orchestrator | 2026-01-10 14:56:44 | INFO  | Task 8e3a721f-515c-40ab-a6e8-a989eaed8f4c is in state STARTED
2026-01-10 14:56:44.957068 | orchestrator | 2026-01-10 14:56:44 | INFO  | Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED
2026-01-10 14:56:44.957144 | orchestrator | 2026-01-10 14:56:44 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:56:48.022752 | orchestrator | 2026-01-10 14:56:48 | INFO  | Task 8e3a721f-515c-40ab-a6e8-a989eaed8f4c is in state STARTED
2026-01-10 14:56:48.024239 | orchestrator | 2026-01-10 14:56:48 | INFO  | Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED
2026-01-10 
14:56:48.024274 | orchestrator | 2026-01-10 14:56:48 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:56:51.073185 | orchestrator | 2026-01-10 14:56:51 | INFO  | Task 8e3a721f-515c-40ab-a6e8-a989eaed8f4c is in state STARTED 2026-01-10 14:56:51.075314 | orchestrator | 2026-01-10 14:56:51 | INFO  | Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED 2026-01-10 14:56:51.075367 | orchestrator | 2026-01-10 14:56:51 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:56:54.115568 | orchestrator | 2026-01-10 14:56:54 | INFO  | Task 8e3a721f-515c-40ab-a6e8-a989eaed8f4c is in state STARTED 2026-01-10 14:56:54.118632 | orchestrator | 2026-01-10 14:56:54 | INFO  | Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED 2026-01-10 14:56:54.118696 | orchestrator | 2026-01-10 14:56:54 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:56:57.162868 | orchestrator | 2026-01-10 14:56:57 | INFO  | Task 8e3a721f-515c-40ab-a6e8-a989eaed8f4c is in state STARTED 2026-01-10 14:56:57.164277 | orchestrator | 2026-01-10 14:56:57 | INFO  | Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED 2026-01-10 14:56:57.164358 | orchestrator | 2026-01-10 14:56:57 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:57:00.209093 | orchestrator | 2026-01-10 14:57:00 | INFO  | Task 8e3a721f-515c-40ab-a6e8-a989eaed8f4c is in state STARTED 2026-01-10 14:57:00.212208 | orchestrator | 2026-01-10 14:57:00 | INFO  | Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED 2026-01-10 14:57:00.212439 | orchestrator | 2026-01-10 14:57:00 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:57:03.259818 | orchestrator | 2026-01-10 14:57:03 | INFO  | Task 8e3a721f-515c-40ab-a6e8-a989eaed8f4c is in state STARTED 2026-01-10 14:57:03.263411 | orchestrator | 2026-01-10 14:57:03 | INFO  | Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED 2026-01-10 14:57:03.263479 | orchestrator | 2026-01-10 14:57:03 | INFO  | Wait 1 second(s) 
until the next check 2026-01-10 14:57:06.310923 | orchestrator | 2026-01-10 14:57:06 | INFO  | Task 8e3a721f-515c-40ab-a6e8-a989eaed8f4c is in state STARTED 2026-01-10 14:57:06.312910 | orchestrator | 2026-01-10 14:57:06 | INFO  | Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED 2026-01-10 14:57:06.312974 | orchestrator | 2026-01-10 14:57:06 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:57:09.355564 | orchestrator | 2026-01-10 14:57:09 | INFO  | Task 8e3a721f-515c-40ab-a6e8-a989eaed8f4c is in state STARTED 2026-01-10 14:57:09.358097 | orchestrator | 2026-01-10 14:57:09 | INFO  | Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED 2026-01-10 14:57:09.358149 | orchestrator | 2026-01-10 14:57:09 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:57:12.406767 | orchestrator | 2026-01-10 14:57:12 | INFO  | Task 8e3a721f-515c-40ab-a6e8-a989eaed8f4c is in state STARTED 2026-01-10 14:57:12.408354 | orchestrator | 2026-01-10 14:57:12 | INFO  | Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED 2026-01-10 14:57:12.408409 | orchestrator | 2026-01-10 14:57:12 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:57:15.456377 | orchestrator | 2026-01-10 14:57:15 | INFO  | Task 8e3a721f-515c-40ab-a6e8-a989eaed8f4c is in state STARTED 2026-01-10 14:57:15.458549 | orchestrator | 2026-01-10 14:57:15 | INFO  | Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED 2026-01-10 14:57:15.458606 | orchestrator | 2026-01-10 14:57:15 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:57:18.500504 | orchestrator | 2026-01-10 14:57:18 | INFO  | Task 8e3a721f-515c-40ab-a6e8-a989eaed8f4c is in state STARTED 2026-01-10 14:57:18.506233 | orchestrator | 2026-01-10 14:57:18 | INFO  | Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED 2026-01-10 14:57:18.506397 | orchestrator | 2026-01-10 14:57:18 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:57:21.537112 | orchestrator | 2026-01-10 
14:57:21 | INFO  | Task 8e3a721f-515c-40ab-a6e8-a989eaed8f4c is in state STARTED 2026-01-10 14:57:21.537718 | orchestrator | 2026-01-10 14:57:21 | INFO  | Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED 2026-01-10 14:57:21.537965 | orchestrator | 2026-01-10 14:57:21 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:57:24.585861 | orchestrator | 2026-01-10 14:57:24 | INFO  | Task 8e3a721f-515c-40ab-a6e8-a989eaed8f4c is in state STARTED 2026-01-10 14:57:24.588516 | orchestrator | 2026-01-10 14:57:24 | INFO  | Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED 2026-01-10 14:57:24.588577 | orchestrator | 2026-01-10 14:57:24 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:57:27.631209 | orchestrator | 2026-01-10 14:57:27 | INFO  | Task 8e3a721f-515c-40ab-a6e8-a989eaed8f4c is in state STARTED 2026-01-10 14:57:27.633006 | orchestrator | 2026-01-10 14:57:27 | INFO  | Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED 2026-01-10 14:57:27.633040 | orchestrator | 2026-01-10 14:57:27 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:57:30.677445 | orchestrator | 2026-01-10 14:57:30 | INFO  | Task 8e3a721f-515c-40ab-a6e8-a989eaed8f4c is in state STARTED 2026-01-10 14:57:30.678947 | orchestrator | 2026-01-10 14:57:30 | INFO  | Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED 2026-01-10 14:57:30.679032 | orchestrator | 2026-01-10 14:57:30 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:57:33.725546 | orchestrator | 2026-01-10 14:57:33 | INFO  | Task 8e3a721f-515c-40ab-a6e8-a989eaed8f4c is in state STARTED 2026-01-10 14:57:33.726531 | orchestrator | 2026-01-10 14:57:33 | INFO  | Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED 2026-01-10 14:57:33.726581 | orchestrator | 2026-01-10 14:57:33 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:57:36.772789 | orchestrator | 2026-01-10 14:57:36 | INFO  | Task 8e3a721f-515c-40ab-a6e8-a989eaed8f4c is in state 
STARTED 2026-01-10 14:57:36.773539 | orchestrator | 2026-01-10 14:57:36 | INFO  | Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED 2026-01-10 14:57:36.773953 | orchestrator | 2026-01-10 14:57:36 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:57:39.816408 | orchestrator | 2026-01-10 14:57:39 | INFO  | Task 8e3a721f-515c-40ab-a6e8-a989eaed8f4c is in state STARTED 2026-01-10 14:57:39.817583 | orchestrator | 2026-01-10 14:57:39 | INFO  | Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED 2026-01-10 14:57:39.818189 | orchestrator | 2026-01-10 14:57:39 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:57:42.859737 | orchestrator | 2026-01-10 14:57:42 | INFO  | Task 8e3a721f-515c-40ab-a6e8-a989eaed8f4c is in state STARTED 2026-01-10 14:57:42.860657 | orchestrator | 2026-01-10 14:57:42 | INFO  | Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED 2026-01-10 14:57:42.860812 | orchestrator | 2026-01-10 14:57:42 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:57:45.895153 | orchestrator | 2026-01-10 14:57:45 | INFO  | Task 8e3a721f-515c-40ab-a6e8-a989eaed8f4c is in state STARTED 2026-01-10 14:57:45.895940 | orchestrator | 2026-01-10 14:57:45 | INFO  | Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED 2026-01-10 14:57:45.896179 | orchestrator | 2026-01-10 14:57:45 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:57:48.934084 | orchestrator | 2026-01-10 14:57:48 | INFO  | Task 8e3a721f-515c-40ab-a6e8-a989eaed8f4c is in state STARTED 2026-01-10 14:57:48.937497 | orchestrator | 2026-01-10 14:57:48 | INFO  | Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED 2026-01-10 14:57:48.937602 | orchestrator | 2026-01-10 14:57:48 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:57:51.977018 | orchestrator | 2026-01-10 14:57:51 | INFO  | Task 8e3a721f-515c-40ab-a6e8-a989eaed8f4c is in state STARTED 2026-01-10 14:57:51.978827 | orchestrator | 2026-01-10 14:57:51 | INFO  
| Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED 2026-01-10 14:57:51.978869 | orchestrator | 2026-01-10 14:57:51 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:57:55.031797 | orchestrator | 2026-01-10 14:57:55 | INFO  | Task 8e3a721f-515c-40ab-a6e8-a989eaed8f4c is in state STARTED 2026-01-10 14:57:55.034773 | orchestrator | 2026-01-10 14:57:55 | INFO  | Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED 2026-01-10 14:57:55.034820 | orchestrator | 2026-01-10 14:57:55 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:57:58.083620 | orchestrator | 2026-01-10 14:57:58 | INFO  | Task 8e3a721f-515c-40ab-a6e8-a989eaed8f4c is in state STARTED 2026-01-10 14:57:58.086625 | orchestrator | 2026-01-10 14:57:58 | INFO  | Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED 2026-01-10 14:57:58.087429 | orchestrator | 2026-01-10 14:57:58 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:58:01.138997 | orchestrator | 2026-01-10 14:58:01 | INFO  | Task 8e3a721f-515c-40ab-a6e8-a989eaed8f4c is in state STARTED 2026-01-10 14:58:01.140393 | orchestrator | 2026-01-10 14:58:01 | INFO  | Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED 2026-01-10 14:58:01.140442 | orchestrator | 2026-01-10 14:58:01 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:58:04.191567 | orchestrator | 2026-01-10 14:58:04 | INFO  | Task 8e3a721f-515c-40ab-a6e8-a989eaed8f4c is in state STARTED 2026-01-10 14:58:04.196247 | orchestrator | 2026-01-10 14:58:04 | INFO  | Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED 2026-01-10 14:58:04.196763 | orchestrator | 2026-01-10 14:58:04 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:58:07.243653 | orchestrator | 2026-01-10 14:58:07 | INFO  | Task 8e3a721f-515c-40ab-a6e8-a989eaed8f4c is in state STARTED 2026-01-10 14:58:07.243850 | orchestrator | 2026-01-10 14:58:07 | INFO  | Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED 2026-01-10 
14:58:07.243865 | orchestrator | 2026-01-10 14:58:07 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:58:10.270943 | orchestrator | 2026-01-10 14:58:10 | INFO  | Task 8e3a721f-515c-40ab-a6e8-a989eaed8f4c is in state STARTED 2026-01-10 14:58:10.271545 | orchestrator | 2026-01-10 14:58:10 | INFO  | Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED 2026-01-10 14:58:10.271599 | orchestrator | 2026-01-10 14:58:10 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:58:13.296023 | orchestrator | 2026-01-10 14:58:13 | INFO  | Task 8e3a721f-515c-40ab-a6e8-a989eaed8f4c is in state STARTED 2026-01-10 14:58:13.296248 | orchestrator | 2026-01-10 14:58:13 | INFO  | Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED 2026-01-10 14:58:13.296282 | orchestrator | 2026-01-10 14:58:13 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:58:16.323798 | orchestrator | 2026-01-10 14:58:16 | INFO  | Task 8e3a721f-515c-40ab-a6e8-a989eaed8f4c is in state STARTED 2026-01-10 14:58:16.327440 | orchestrator | 2026-01-10 14:58:16 | INFO  | Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED 2026-01-10 14:58:16.327498 | orchestrator | 2026-01-10 14:58:16 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:58:19.346625 | orchestrator | 2026-01-10 14:58:19 | INFO  | Task 8e3a721f-515c-40ab-a6e8-a989eaed8f4c is in state STARTED 2026-01-10 14:58:19.346759 | orchestrator | 2026-01-10 14:58:19 | INFO  | Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED 2026-01-10 14:58:19.346770 | orchestrator | 2026-01-10 14:58:19 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:58:22.382083 | orchestrator | 2026-01-10 14:58:22 | INFO  | Task 8e3a721f-515c-40ab-a6e8-a989eaed8f4c is in state STARTED 2026-01-10 14:58:22.383425 | orchestrator | 2026-01-10 14:58:22 | INFO  | Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED 2026-01-10 14:58:22.383483 | orchestrator | 2026-01-10 14:58:22 | INFO  | Wait 1 second(s) 
until the next check 2026-01-10 14:58:25.432029 | orchestrator | 2026-01-10 14:58:25 | INFO  | Task 8e3a721f-515c-40ab-a6e8-a989eaed8f4c is in state STARTED 2026-01-10 14:58:25.435082 | orchestrator | 2026-01-10 14:58:25 | INFO  | Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED 2026-01-10 14:58:25.435181 | orchestrator | 2026-01-10 14:58:25 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:58:28.480074 | orchestrator | 2026-01-10 14:58:28 | INFO  | Task 8e3a721f-515c-40ab-a6e8-a989eaed8f4c is in state STARTED 2026-01-10 14:58:28.482481 | orchestrator | 2026-01-10 14:58:28 | INFO  | Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED 2026-01-10 14:58:28.482548 | orchestrator | 2026-01-10 14:58:28 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:58:31.531387 | orchestrator | 2026-01-10 14:58:31 | INFO  | Task 8e3a721f-515c-40ab-a6e8-a989eaed8f4c is in state STARTED 2026-01-10 14:58:31.532499 | orchestrator | 2026-01-10 14:58:31 | INFO  | Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED 2026-01-10 14:58:31.532528 | orchestrator | 2026-01-10 14:58:31 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:58:34.581682 | orchestrator | 2026-01-10 14:58:34 | INFO  | Task 8e3a721f-515c-40ab-a6e8-a989eaed8f4c is in state STARTED 2026-01-10 14:58:34.584723 | orchestrator | 2026-01-10 14:58:34 | INFO  | Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED 2026-01-10 14:58:34.584791 | orchestrator | 2026-01-10 14:58:34 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:58:37.630854 | orchestrator | 2026-01-10 14:58:37 | INFO  | Task 8e3a721f-515c-40ab-a6e8-a989eaed8f4c is in state STARTED 2026-01-10 14:58:37.632063 | orchestrator | 2026-01-10 14:58:37 | INFO  | Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED 2026-01-10 14:58:37.632620 | orchestrator | 2026-01-10 14:58:37 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:58:40.673959 | orchestrator | 2026-01-10 
14:58:40 | INFO  | Task 8e3a721f-515c-40ab-a6e8-a989eaed8f4c is in state STARTED 2026-01-10 14:58:40.676545 | orchestrator | 2026-01-10 14:58:40 | INFO  | Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED 2026-01-10 14:58:40.676618 | orchestrator | 2026-01-10 14:58:40 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:58:43.724046 | orchestrator | 2026-01-10 14:58:43 | INFO  | Task 8e3a721f-515c-40ab-a6e8-a989eaed8f4c is in state STARTED 2026-01-10 14:58:43.726005 | orchestrator | 2026-01-10 14:58:43 | INFO  | Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED 2026-01-10 14:58:43.726193 | orchestrator | 2026-01-10 14:58:43 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:58:46.775173 | orchestrator | 2026-01-10 14:58:46 | INFO  | Task 8e3a721f-515c-40ab-a6e8-a989eaed8f4c is in state STARTED 2026-01-10 14:58:46.777309 | orchestrator | 2026-01-10 14:58:46 | INFO  | Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED 2026-01-10 14:58:46.777510 | orchestrator | 2026-01-10 14:58:46 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:58:49.821243 | orchestrator | 2026-01-10 14:58:49 | INFO  | Task 8e3a721f-515c-40ab-a6e8-a989eaed8f4c is in state STARTED 2026-01-10 14:58:49.823308 | orchestrator | 2026-01-10 14:58:49 | INFO  | Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED 2026-01-10 14:58:49.824006 | orchestrator | 2026-01-10 14:58:49 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:58:52.865719 | orchestrator | 2026-01-10 14:58:52 | INFO  | Task 8e3a721f-515c-40ab-a6e8-a989eaed8f4c is in state STARTED 2026-01-10 14:58:52.866454 | orchestrator | 2026-01-10 14:58:52 | INFO  | Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED 2026-01-10 14:58:52.866502 | orchestrator | 2026-01-10 14:58:52 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:58:55.910157 | orchestrator | 2026-01-10 14:58:55 | INFO  | Task 8e3a721f-515c-40ab-a6e8-a989eaed8f4c is in state 
STARTED
2026-01-10 14:58:55.910948 | orchestrator | 2026-01-10 14:58:55 | INFO  | Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED
2026-01-10 14:58:55.910984 | orchestrator | 2026-01-10 14:58:55 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:58:58.963602 | orchestrator | 2026-01-10 14:58:58 | INFO  | Task 8e3a721f-515c-40ab-a6e8-a989eaed8f4c is in state STARTED
2026-01-10 14:58:58.965262 | orchestrator | 2026-01-10 14:58:58 | INFO  | Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state STARTED
2026-01-10 14:58:58.965299 | orchestrator | 2026-01-10 14:58:58 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:59:02.014412 | orchestrator | 2026-01-10 14:59:02 | INFO  | Task 8e3a721f-515c-40ab-a6e8-a989eaed8f4c is in state STARTED
2026-01-10 14:59:02.018450 | orchestrator | 2026-01-10 14:59:02 | INFO  | Task 70c83860-b1e8-40b8-86b4-a9207523af25 is in state SUCCESS
2026-01-10 14:59:02.020485 | orchestrator |
2026-01-10 14:59:02.020555 | orchestrator |
2026-01-10 14:59:02.020594 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-10 14:59:02.020603 | orchestrator |
2026-01-10 14:59:02.020622 | orchestrator | TASK [Group hosts based on OpenStack release] **********************************
2026-01-10 14:59:02.020630 | orchestrator | Saturday 10 January 2026 14:50:36 +0000 (0:00:00.381)       0:00:00.381 ******
2026-01-10 14:59:02.020637 | orchestrator | changed: [testbed-manager]
2026-01-10 14:59:02.020645 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:59:02.020651 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:59:02.020658 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:59:02.020665 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:59:02.020671 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:59:02.020677 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:59:02.020684 | orchestrator |
2026-01-10 14:59:02.020690 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-10 14:59:02.020697 | orchestrator | Saturday 10 January 2026 14:50:37 +0000 (0:00:00.882)       0:00:01.264 ******
2026-01-10 14:59:02.020704 | orchestrator | changed: [testbed-manager]
2026-01-10 14:59:02.020711 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:59:02.020718 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:59:02.020725 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:59:02.020809 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:59:02.020816 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:59:02.020822 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:59:02.020828 | orchestrator |
2026-01-10 14:59:02.020835 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-10 14:59:02.020841 | orchestrator | Saturday 10 January 2026 14:50:38 +0000 (0:00:00.709)       0:00:01.974 ******
2026-01-10 14:59:02.020848 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True)
2026-01-10 14:59:02.020855 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True)
2026-01-10 14:59:02.020862 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True)
2026-01-10 14:59:02.020869 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True)
2026-01-10 14:59:02.020875 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True)
2026-01-10 14:59:02.020882 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True)
2026-01-10 14:59:02.020888 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True)
2026-01-10 14:59:02.020895 | orchestrator |
2026-01-10 14:59:02.020901 | orchestrator | PLAY [Bootstrap nova API databases] ********************************************
2026-01-10 14:59:02.020908 | orchestrator |
2026-01-10 14:59:02.020915 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-01-10 14:59:02.020921 | orchestrator | Saturday 10 January 2026 14:50:39 +0000 (0:00:01.109)       0:00:03.083 ******
2026-01-10 14:59:02.020928 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:59:02.020935 | orchestrator |
2026-01-10 14:59:02.020942 | orchestrator | TASK [nova : Creating Nova databases] ******************************************
2026-01-10 14:59:02.020959 | orchestrator | Saturday 10 January 2026 14:50:40 +0000 (0:00:00.887)       0:00:03.971 ******
2026-01-10 14:59:02.020967 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0)
2026-01-10 14:59:02.020974 | orchestrator | changed: [testbed-node-0] => (item=nova_api)
2026-01-10 14:59:02.020981 | orchestrator |
2026-01-10 14:59:02.020988 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] *************
2026-01-10 14:59:02.020994 | orchestrator | Saturday 10 January 2026 14:50:44 +0000 (0:00:04.567)       0:00:08.539 ******
2026-01-10 14:59:02.021001 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-01-10 14:59:02.021008 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-01-10 14:59:02.021014 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:59:02.021020 | orchestrator |
2026-01-10 14:59:02.021027 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2026-01-10 14:59:02.021034 | orchestrator | Saturday 10 January 2026 14:50:49 +0000 (0:00:04.372)       0:00:12.911 ******
2026-01-10 14:59:02.021182 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:59:02.021211 | orchestrator |
2026-01-10 14:59:02.021219 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************
2026-01-10 14:59:02.021227 | orchestrator | Saturday 10 January 2026 14:50:49 +0000 (0:00:00.720)       0:00:13.632 ******
2026-01-10 14:59:02.021234 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:59:02.021242 | orchestrator |
2026-01-10 14:59:02.021249 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ********************
2026-01-10 14:59:02.021256 | orchestrator | Saturday 10 January 2026 14:50:51 +0000 (0:00:01.617)       0:00:15.249 ******
2026-01-10 14:59:02.021263 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:59:02.021270 | orchestrator |
2026-01-10 14:59:02.021278 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-01-10 14:59:02.021286 | orchestrator | Saturday 10 January 2026 14:50:54 +0000 (0:00:02.630)       0:00:17.879 ******
2026-01-10 14:59:02.021294 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:59:02.021301 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:59:02.021308 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:59:02.021315 | orchestrator |
2026-01-10 14:59:02.021323 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2026-01-10 14:59:02.021330 | orchestrator | Saturday 10 January 2026 14:50:54 +0000 (0:00:00.485)       0:00:18.365 ******
2026-01-10 14:59:02.021336 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:59:02.021343 | orchestrator |
2026-01-10 14:59:02.021349 | orchestrator | TASK [nova : Create cell0 mappings] ********************************************
2026-01-10 14:59:02.021355 | orchestrator | Saturday 10 January 2026 14:51:25 +0000 (0:00:30.919)       0:00:49.284 ******
2026-01-10 14:59:02.021361 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:59:02.021367 | orchestrator |
2026-01-10 14:59:02.021373 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-01-10 14:59:02.021382 | orchestrator | Saturday 10 January 2026 14:51:41 +0000 (0:00:15.966)       0:01:05.250 ******
2026-01-10 14:59:02.021388 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:59:02.021395 | orchestrator |
2026-01-10 14:59:02.021401 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-01-10 14:59:02.021408 | orchestrator | Saturday 10 January 2026 14:51:54 +0000 (0:00:13.398)       0:01:18.649 ******
2026-01-10 14:59:02.021432 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:59:02.021438 | orchestrator |
2026-01-10 14:59:02.021445 | orchestrator | TASK [nova : Update cell0 mappings] ********************************************
2026-01-10 14:59:02.021458 | orchestrator | Saturday 10 January 2026 14:51:56 +0000 (0:00:01.205)       0:01:19.854 ******
2026-01-10 14:59:02.021465 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:59:02.021471 | orchestrator |
2026-01-10 14:59:02.021476 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-01-10 14:59:02.021482 | orchestrator | Saturday 10 January 2026 14:51:56 +0000 (0:00:00.490)       0:01:20.344 ******
2026-01-10 14:59:02.021489 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:59:02.021495 | orchestrator |
2026-01-10 14:59:02.021501 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2026-01-10 14:59:02.021507 | orchestrator | Saturday 10 January 2026 14:51:57 +0000 (0:00:00.485)       0:01:20.830 ******
2026-01-10 14:59:02.021513 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:59:02.021519 | orchestrator |
2026-01-10 14:59:02.021525 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2026-01-10 14:59:02.021531 | orchestrator | Saturday 10 January 2026 14:52:16 +0000 (0:00:19.561)       0:01:40.391 ******
2026-01-10 14:59:02.021537 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:59:02.021543 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:59:02.021549 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:59:02.021555 | orchestrator |
2026-01-10 14:59:02.021561 | orchestrator | PLAY [Bootstrap nova cell databases] *******************************************
2026-01-10 14:59:02.021567 | orchestrator |
2026-01-10 14:59:02.021573 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-01-10 14:59:02.021595 | orchestrator | Saturday 10 January 2026 14:52:17 +0000 (0:00:00.320)       0:01:40.711 ******
2026-01-10 14:59:02.021602 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:59:02.021608 | orchestrator |
2026-01-10 14:59:02.021613 | orchestrator | TASK [nova-cell : Creating Nova cell database] *********************************
2026-01-10 14:59:02.021620 | orchestrator | Saturday 10 January 2026 14:52:17 +0000 (0:00:00.592)       0:01:41.304 ******
2026-01-10 14:59:02.021626 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:59:02.021632 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:59:02.021639 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:59:02.021646 | orchestrator |
2026-01-10 14:59:02.021652 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] ****
2026-01-10 14:59:02.021659 | orchestrator | Saturday 10 January 2026 14:52:19 +0000 (0:00:01.945)       0:01:43.249 ******
2026-01-10 14:59:02.021665 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:59:02.021672 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:59:02.021679 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:59:02.021685 | orchestrator |
2026-01-10 14:59:02.021692 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-01-10 14:59:02.021698 | orchestrator | Saturday 10 January 2026 14:52:21 +0000 (0:00:02.153)       0:01:45.402 ******
2026-01-10 14:59:02.021705 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:59:02.021711 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:59:02.021718 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:59:02.021725 | orchestrator |
2026-01-10 14:59:02.021732 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-01-10 14:59:02.021738 | orchestrator | Saturday 10 January 2026 14:52:22 +0000 (0:00:00.324)       0:01:45.727 ******
2026-01-10 14:59:02.021746 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-01-10 14:59:02.021752 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:59:02.021759 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-01-10 14:59:02.021765 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:59:02.021771 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-01-10 14:59:02.021777 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}]
2026-01-10 14:59:02.021785 | orchestrator |
2026-01-10 14:59:02.021792 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-01-10 14:59:02.021799 | orchestrator | Saturday 10 January 2026 14:52:29 +0000 (0:00:07.607)       0:01:53.335 ******
2026-01-10 14:59:02.021805 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:59:02.021812 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:59:02.021818 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:59:02.021824 | orchestrator |
2026-01-10 14:59:02.021830 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-01-10 14:59:02.021836 | orchestrator | Saturday 10 January 2026 14:52:30 +0000 (0:00:00.392)       0:01:53.727 ******
2026-01-10 14:59:02.021843 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-01-10 14:59:02.021849 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:59:02.021855 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-01-10 14:59:02.021862 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:59:02.021868 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-01-10 14:59:02.021874 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:59:02.021880 | orchestrator |
2026-01-10 14:59:02.021887 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2026-01-10 14:59:02.021893 | orchestrator | Saturday 10 January 2026 14:52:30 +0000 (0:00:00.747)       0:01:54.475 ******
2026-01-10 14:59:02.021899 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:59:02.021905 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:59:02.021911 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:59:02.021917 | orchestrator |
2026-01-10 14:59:02.021923 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ******
2026-01-10 14:59:02.021930 | orchestrator | Saturday 10 January 2026 14:52:31 +0000 (0:00:00.697)       0:01:55.172 ******
2026-01-10 14:59:02.021942 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:59:02.021949 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:59:02.021955 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:59:02.021961 | orchestrator |
2026-01-10 14:59:02.021968 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] **************
2026-01-10 14:59:02.021974 | orchestrator | Saturday 10 January 2026 14:52:32 +0000 (0:00:01.108)       0:01:56.280 ******
2026-01-10 14:59:02.021979 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:59:02.021986 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:59:02.022002 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:59:02.022009 | orchestrator |
2026-01-10 14:59:02.022072 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] ***********************
2026-01-10 14:59:02.022102 | orchestrator | Saturday 10 January 2026 14:52:35 +0000 (0:00:03.153)       0:01:59.434 ******
2026-01-10 14:59:02.022109 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:59:02.022115 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:59:02.022122 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:59:02.022128 | orchestrator |
2026-01-10 14:59:02.022135 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-01-10 14:59:02.022141 | orchestrator | Saturday 10 January 2026 14:52:57 +0000 (0:00:21.910)       0:02:21.345 ******
2026-01-10 14:59:02.022147 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:59:02.022154 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:59:02.022160 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:59:02.022167 | orchestrator |
2026-01-10 14:59:02.022174 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-01-10 14:59:02.022181 | orchestrator | Saturday 10 January 2026 14:53:09 +0000 (0:00:12.194)       0:02:33.540 ******
2026-01-10 14:59:02.022188 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:59:02.022195 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:59:02.022202 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:59:02.022207 | orchestrator |
2026-01-10 14:59:02.022213 | orchestrator | TASK [nova-cell : Create cell] *************************************************
2026-01-10 14:59:02.022219 | orchestrator | Saturday 10 January 2026 14:53:11 +0000 (0:00:01.465)       0:02:35.005 ******
2026-01-10 14:59:02.022225 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:59:02.022231 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:59:02.022236 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:59:02.022242 | orchestrator |
2026-01-10 14:59:02.022249 | orchestrator | TASK [nova-cell : Update cell] *************************************************
2026-01-10 14:59:02.022256 | orchestrator | Saturday 10 January 2026 14:53:23 +0000 (0:00:12.170)       0:02:47.175 ******
2026-01-10 14:59:02.022263 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:59:02.022270 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:59:02.022277 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:59:02.022285
| orchestrator | 2026-01-10 14:59:02.022292 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2026-01-10 14:59:02.022299 | orchestrator | Saturday 10 January 2026 14:53:24 +0000 (0:00:01.098) 0:02:48.274 ****** 2026-01-10 14:59:02.022306 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:59:02.022312 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:59:02.022319 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:59:02.022326 | orchestrator | 2026-01-10 14:59:02.022332 | orchestrator | PLAY [Apply role nova] ********************************************************* 2026-01-10 14:59:02.022339 | orchestrator | 2026-01-10 14:59:02.022346 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-01-10 14:59:02.022353 | orchestrator | Saturday 10 January 2026 14:53:25 +0000 (0:00:00.533) 0:02:48.807 ****** 2026-01-10 14:59:02.022361 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:59:02.022370 | orchestrator | 2026-01-10 14:59:02.022376 | orchestrator | TASK [service-ks-register : nova | Creating services] ************************** 2026-01-10 14:59:02.022393 | orchestrator | Saturday 10 January 2026 14:53:25 +0000 (0:00:00.585) 0:02:49.393 ****** 2026-01-10 14:59:02.022400 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))  2026-01-10 14:59:02.022407 | orchestrator | changed: [testbed-node-0] => (item=nova (compute)) 2026-01-10 14:59:02.022414 | orchestrator | 2026-01-10 14:59:02.022421 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] ************************* 2026-01-10 14:59:02.022428 | orchestrator | Saturday 10 January 2026 14:53:28 +0000 (0:00:03.247) 0:02:52.640 ****** 2026-01-10 14:59:02.022435 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s 
-> internal)  2026-01-10 14:59:02.022452 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)  2026-01-10 14:59:02.022460 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal) 2026-01-10 14:59:02.022467 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public) 2026-01-10 14:59:02.022474 | orchestrator | 2026-01-10 14:59:02.022481 | orchestrator | TASK [service-ks-register : nova | Creating projects] ************************** 2026-01-10 14:59:02.022488 | orchestrator | Saturday 10 January 2026 14:53:35 +0000 (0:00:06.683) 0:02:59.323 ****** 2026-01-10 14:59:02.022496 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-01-10 14:59:02.022503 | orchestrator | 2026-01-10 14:59:02.022510 | orchestrator | TASK [service-ks-register : nova | Creating users] ***************************** 2026-01-10 14:59:02.022517 | orchestrator | Saturday 10 January 2026 14:53:39 +0000 (0:00:03.863) 0:03:03.186 ****** 2026-01-10 14:59:02.022524 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-01-10 14:59:02.022531 | orchestrator | changed: [testbed-node-0] => (item=nova -> service) 2026-01-10 14:59:02.022538 | orchestrator | 2026-01-10 14:59:02.022545 | orchestrator | TASK [service-ks-register : nova | Creating roles] ***************************** 2026-01-10 14:59:02.022552 | orchestrator | Saturday 10 January 2026 14:53:43 +0000 (0:00:03.633) 0:03:06.819 ****** 2026-01-10 14:59:02.022559 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-01-10 14:59:02.022566 | orchestrator | 2026-01-10 14:59:02.022573 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************ 2026-01-10 14:59:02.022580 | orchestrator | Saturday 10 January 2026 14:53:46 +0000 (0:00:03.524) 0:03:10.344 ****** 2026-01-10 14:59:02.022586 | 
orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin) 2026-01-10 14:59:02.022593 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2026-01-10 14:59:02.022599 | orchestrator | 2026-01-10 14:59:02.022606 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2026-01-10 14:59:02.022620 | orchestrator | Saturday 10 January 2026 14:53:54 +0000 (0:00:07.858) 0:03:18.202 ****** 2026-01-10 14:59:02.022637 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-10 14:59:02.022652 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 
'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-10 14:59:02.022661 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 
'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-10 14:59:02.022680 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-10 14:59:02.022689 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-10 14:59:02.022695 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-10 14:59:02.022707 | orchestrator | 2026-01-10 14:59:02.022714 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2026-01-10 14:59:02.022721 | orchestrator | Saturday 10 January 2026 14:53:55 +0000 (0:00:01.155) 0:03:19.357 ****** 2026-01-10 14:59:02.022727 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:59:02.022734 | orchestrator | 2026-01-10 14:59:02.022742 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2026-01-10 14:59:02.022748 | orchestrator | Saturday 10 January 2026 14:53:55 +0000 (0:00:00.118) 0:03:19.475 ****** 2026-01-10 14:59:02.022755 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:59:02.022762 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:59:02.022769 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:59:02.022776 | orchestrator | 2026-01-10 14:59:02.022783 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2026-01-10 14:59:02.022790 | orchestrator | Saturday 10 January 2026 14:53:56 +0000 (0:00:00.282) 0:03:19.757 ****** 2026-01-10 14:59:02.022796 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-10 14:59:02.022803 | orchestrator | 2026-01-10 14:59:02.022810 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2026-01-10 14:59:02.022817 | orchestrator | Saturday 10 January 2026 14:53:56 +0000 (0:00:00.906) 0:03:20.664 ****** 2026-01-10 14:59:02.022823 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:59:02.022830 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:59:02.022837 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:59:02.022844 | orchestrator | 2026-01-10 
14:59:02.022851 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-01-10 14:59:02.022858 | orchestrator | Saturday 10 January 2026 14:53:57 +0000 (0:00:00.316) 0:03:20.980 ****** 2026-01-10 14:59:02.022865 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:59:02.022872 | orchestrator | 2026-01-10 14:59:02.022879 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-01-10 14:59:02.022886 | orchestrator | Saturday 10 January 2026 14:53:57 +0000 (0:00:00.536) 0:03:21.517 ****** 2026-01-10 14:59:02.022899 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-10 14:59:02.022912 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-10 14:59:02.022927 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 
'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-10 14:59:02.022935 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-10 14:59:02.022942 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-10 14:59:02.022959 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-10 14:59:02.023067 | orchestrator | 2026-01-10 14:59:02.023111 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-01-10 14:59:02.023119 | orchestrator | Saturday 10 January 2026 14:54:00 +0000 (0:00:02.494) 0:03:24.012 ****** 2026-01-10 14:59:02.023185 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 
'tls_backend': 'no'}}}})  2026-01-10 14:59:02.023195 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-10 14:59:02.023202 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:59:02.023209 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 
'no'}}}})  2026-01-10 14:59:02.023225 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-10 14:59:02.023239 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:59:02.023280 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  
2026-01-10 14:59:02.023291 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-10 14:59:02.023298 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:59:02.023305 | orchestrator | 2026-01-10 14:59:02.023311 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-01-10 14:59:02.023318 | orchestrator | Saturday 10 January 2026 14:54:01 +0000 (0:00:00.872) 0:03:24.884 ****** 2026-01-10 14:59:02.023324 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-10 14:59:02.023331 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-10 14:59:02.023412 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:59:02.023438 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-10 14:59:02.023449 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-10 14:59:02.023455 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:59:02.023462 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-10 14:59:02.023469 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-10 14:59:02.023483 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:59:02.023490 | orchestrator | 2026-01-10 14:59:02.023496 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2026-01-10 14:59:02.023503 | orchestrator | Saturday 10 January 2026 14:54:01 +0000 (0:00:00.788) 0:03:25.673 ****** 2026-01-10 14:59:02.023520 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 
'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-10 14:59:02.023528 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-10 14:59:02.023535 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 
'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-10 14:59:02.023564 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-10 14:59:02.023576 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 
'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-10 14:59:02.023583 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-10 14:59:02.023589 | orchestrator | 2026-01-10 14:59:02.023597 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2026-01-10 14:59:02.023603 | orchestrator | Saturday 10 January 2026 14:54:04 +0000 (0:00:02.441) 0:03:28.114 ****** 2026-01-10 14:59:02.023609 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-10 14:59:02.023617 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-10 14:59:02.023638 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 
'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-10 14:59:02.023646 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-10 14:59:02.023653 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-10 14:59:02.023660 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-10 14:59:02.023667 | orchestrator | 2026-01-10 14:59:02.023674 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2026-01-10 14:59:02.023681 | orchestrator | Saturday 10 January 2026 14:54:10 +0000 (0:00:05.701) 0:03:33.816 ****** 2026-01-10 14:59:02.023698 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 
'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-10 14:59:02.023709 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-10 14:59:02.023716 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:59:02.023723 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': 
'30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-10 14:59:02.023730 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-10 14:59:02.023738 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:59:02.023745 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 
'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-10 14:59:02.023770 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-10 14:59:02.023778 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:59:02.023785 | orchestrator | 2026-01-10 14:59:02.023792 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2026-01-10 14:59:02.023799 | orchestrator | Saturday 10 January 2026 14:54:10 +0000 (0:00:00.665) 0:03:34.481 ****** 2026-01-10 14:59:02.023806 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:59:02.023812 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:59:02.023819 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:59:02.023825 | orchestrator | 2026-01-10 14:59:02.023832 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 
2026-01-10 14:59:02.023838 | orchestrator | Saturday 10 January 2026 14:54:12 +0000 (0:00:01.475) 0:03:35.957 ****** 2026-01-10 14:59:02.023845 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:59:02.023851 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:59:02.023857 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:59:02.023864 | orchestrator | 2026-01-10 14:59:02.023870 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2026-01-10 14:59:02.023876 | orchestrator | Saturday 10 January 2026 14:54:12 +0000 (0:00:00.323) 0:03:36.280 ****** 2026-01-10 14:59:02.023883 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-10 14:59:02.023900 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 
'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-10 14:59:02.023920 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-10 14:59:02.023930 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-10 14:59:02.023937 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-10 14:59:02.023944 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 
'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-10 14:59:02.023958 | orchestrator | 2026-01-10 14:59:02.023966 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-01-10 14:59:02.023974 | orchestrator | Saturday 10 January 2026 14:54:14 +0000 (0:00:02.154) 0:03:38.434 ****** 2026-01-10 14:59:02.023980 | orchestrator | 2026-01-10 14:59:02.023987 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-01-10 14:59:02.023995 | orchestrator | Saturday 10 January 2026 14:54:14 +0000 (0:00:00.153) 0:03:38.587 ****** 2026-01-10 14:59:02.024001 | orchestrator | 2026-01-10 14:59:02.024008 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-01-10 14:59:02.024015 | orchestrator | Saturday 10 January 2026 14:54:15 +0000 (0:00:00.129) 0:03:38.717 ****** 2026-01-10 14:59:02.024023 | orchestrator | 2026-01-10 14:59:02.024030 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2026-01-10 14:59:02.024037 | orchestrator | Saturday 10 January 2026 14:54:15 +0000 (0:00:00.133) 0:03:38.851 ****** 2026-01-10 14:59:02.024044 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:59:02.024052 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:59:02.024059 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:59:02.024066 | orchestrator | 2026-01-10 14:59:02.024072 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2026-01-10 14:59:02.024078 | orchestrator | Saturday 10 January 2026 14:54:33 +0000 (0:00:18.469) 0:03:57.321 ****** 
2026-01-10 14:59:02.024084 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:59:02.024122 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:59:02.024129 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:59:02.024136 | orchestrator | 2026-01-10 14:59:02.024142 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2026-01-10 14:59:02.024149 | orchestrator | 2026-01-10 14:59:02.024155 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-01-10 14:59:02.024160 | orchestrator | Saturday 10 January 2026 14:54:39 +0000 (0:00:06.068) 0:04:03.389 ****** 2026-01-10 14:59:02.024167 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:59:02.024175 | orchestrator | 2026-01-10 14:59:02.024188 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-01-10 14:59:02.024195 | orchestrator | Saturday 10 January 2026 14:54:40 +0000 (0:00:01.244) 0:04:04.634 ****** 2026-01-10 14:59:02.024202 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:59:02.024214 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:59:02.024221 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:59:02.024227 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:59:02.024234 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:59:02.024241 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:59:02.024247 | orchestrator | 2026-01-10 14:59:02.024254 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2026-01-10 14:59:02.024261 | orchestrator | Saturday 10 January 2026 14:54:41 +0000 (0:00:00.632) 0:04:05.266 ****** 2026-01-10 14:59:02.024267 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:59:02.024274 | orchestrator | skipping: [testbed-node-1] 
2026-01-10 14:59:02.024280 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:59:02.024286 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-10 14:59:02.024292 | orchestrator | 2026-01-10 14:59:02.024298 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-01-10 14:59:02.024315 | orchestrator | Saturday 10 January 2026 14:54:42 +0000 (0:00:01.037) 0:04:06.304 ****** 2026-01-10 14:59:02.024323 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2026-01-10 14:59:02.024330 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2026-01-10 14:59:02.024336 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2026-01-10 14:59:02.024342 | orchestrator | 2026-01-10 14:59:02.024349 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-01-10 14:59:02.024355 | orchestrator | Saturday 10 January 2026 14:54:43 +0000 (0:00:00.728) 0:04:07.033 ****** 2026-01-10 14:59:02.024361 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2026-01-10 14:59:02.024368 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2026-01-10 14:59:02.024374 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2026-01-10 14:59:02.024380 | orchestrator | 2026-01-10 14:59:02.024387 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-01-10 14:59:02.024393 | orchestrator | Saturday 10 January 2026 14:54:44 +0000 (0:00:01.363) 0:04:08.397 ****** 2026-01-10 14:59:02.024400 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2026-01-10 14:59:02.024406 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:59:02.024413 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2026-01-10 14:59:02.024419 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:59:02.024426 | orchestrator | skipping: [testbed-node-5] => 
(item=br_netfilter)  2026-01-10 14:59:02.024431 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:59:02.024438 | orchestrator | 2026-01-10 14:59:02.024445 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2026-01-10 14:59:02.024452 | orchestrator | Saturday 10 January 2026 14:54:45 +0000 (0:00:00.536) 0:04:08.934 ****** 2026-01-10 14:59:02.024458 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-01-10 14:59:02.024466 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-01-10 14:59:02.024472 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:59:02.024479 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-01-10 14:59:02.024485 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-01-10 14:59:02.024491 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:59:02.024498 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-01-10 14:59:02.024504 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-01-10 14:59:02.024510 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2026-01-10 14:59:02.024517 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:59:02.024524 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2026-01-10 14:59:02.024530 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2026-01-10 14:59:02.024537 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-01-10 14:59:02.024544 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-01-10 14:59:02.024550 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-01-10 
14:59:02.024556 | orchestrator | 2026-01-10 14:59:02.024563 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2026-01-10 14:59:02.024570 | orchestrator | Saturday 10 January 2026 14:54:46 +0000 (0:00:01.338) 0:04:10.272 ****** 2026-01-10 14:59:02.024577 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:59:02.024583 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:59:02.024590 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:59:02.024597 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:59:02.024604 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:59:02.024611 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:59:02.024624 | orchestrator | 2026-01-10 14:59:02.024632 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2026-01-10 14:59:02.024639 | orchestrator | Saturday 10 January 2026 14:54:47 +0000 (0:00:01.215) 0:04:11.488 ****** 2026-01-10 14:59:02.024645 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:59:02.024652 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:59:02.024658 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:59:02.024664 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:59:02.024671 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:59:02.024678 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:59:02.024685 | orchestrator | 2026-01-10 14:59:02.024692 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2026-01-10 14:59:02.024698 | orchestrator | Saturday 10 January 2026 14:54:49 +0000 (0:00:01.961) 0:04:13.450 ****** 2026-01-10 14:59:02.025266 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 
'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-10 14:59:02.025308 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-10 14:59:02.025316 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-10 14:59:02.025323 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-10 14:59:02.025341 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-10 14:59:02.025363 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': 
True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-10 14:59:02.025370 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-10 14:59:02.025375 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-10 14:59:02.025383 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 
'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-10 14:59:02.025390 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-10 14:59:02.025403 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-10 14:59:02.025417 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-10 14:59:02.025425 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-10 14:59:02.025431 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-10 14:59:02.025439 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-10 14:59:02.025445 | orchestrator | 2026-01-10 14:59:02.025452 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-01-10 14:59:02.025460 | orchestrator | Saturday 10 January 2026 14:54:52 +0000 (0:00:02.508) 0:04:15.958 ****** 2026-01-10 14:59:02.025467 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:59:02.025476 | orchestrator | 2026-01-10 14:59:02.025482 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-01-10 14:59:02.025493 | orchestrator | Saturday 10 January 2026 14:54:53 +0000 (0:00:01.222) 0:04:17.181 ****** 2026-01-10 14:59:02.025499 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', 
'/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-10 14:59:02.025515 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-10 14:59:02.025523 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 
'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-10 14:59:02.025530 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-10 14:59:02.025536 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-10 14:59:02.025548 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-10 14:59:02.025554 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-10 14:59:02.025568 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-10 14:59:02.025575 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-10 14:59:02.025582 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-10 14:59:02.025588 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-10 14:59:02.025595 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-10 14:59:02.025606 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-10 14:59:02.025617 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-10 14:59:02.025626 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-10 14:59:02.025633 | orchestrator | 2026-01-10 14:59:02.025639 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-01-10 14:59:02.025645 | orchestrator | Saturday 10 January 2026 14:54:56 +0000 (0:00:03.397) 0:04:20.578 ****** 2026-01-10 14:59:02.025651 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-10 14:59:02.025658 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-10 14:59:02.025671 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-01-10 14:59:02.025678 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:59:02.025689 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-10 14:59:02.025698 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-10 14:59:02.025705 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-01-10 14:59:02.025712 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:59:02.025719 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-10 14:59:02.025735 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-10 14:59:02.025741 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
nova-compute 5672'], 'timeout': '30'}}})  2026-01-10 14:59:02.025748 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:59:02.025762 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-01-10 14:59:02.025769 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-10 14:59:02.025776 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:59:02.025783 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-01-10 14:59:02.025795 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-10 14:59:02.025801 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:59:02.025808 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-01-10 14:59:02.025816 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-10 14:59:02.025823 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:59:02.025830 | orchestrator | 2026-01-10 14:59:02.025837 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-01-10 14:59:02.025844 | orchestrator | Saturday 10 January 2026 14:54:58 +0000 (0:00:01.631) 0:04:22.210 ****** 2026-01-10 14:59:02.025860 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-10 14:59:02.025867 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 
'timeout': '30'}}})  2026-01-10 14:59:02.025875 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-01-10 14:59:02.025888 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:59:02.025894 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-10 14:59:02.025901 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 
'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-10 14:59:02.025911 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-01-10 14:59:02.025918 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:59:02.025927 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 
'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-10 14:59:02.025939 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-10 14:59:02.025946 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-01-10 14:59:02.025952 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:59:02.025958 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 
'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-01-10 14:59:02.025965 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-10 14:59:02.025971 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:59:02.025985 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-01-10 14:59:02.025992 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 
'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-10 14:59:02.026004 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:59:02.026047 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-01-10 14:59:02.026059 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-10 14:59:02.026066 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:59:02.026072 | orchestrator | 2026-01-10 14:59:02.026078 | orchestrator | TASK [nova-cell : 
include_tasks] *********************************************** 2026-01-10 14:59:02.026085 | orchestrator | Saturday 10 January 2026 14:55:00 +0000 (0:00:02.346) 0:04:24.556 ****** 2026-01-10 14:59:02.026115 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:59:02.026122 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:59:02.026128 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:59:02.026135 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-10 14:59:02.026142 | orchestrator | 2026-01-10 14:59:02.026195 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2026-01-10 14:59:02.026202 | orchestrator | Saturday 10 January 2026 14:55:01 +0000 (0:00:01.060) 0:04:25.617 ****** 2026-01-10 14:59:02.026208 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-01-10 14:59:02.026214 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-01-10 14:59:02.026220 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-01-10 14:59:02.026227 | orchestrator | 2026-01-10 14:59:02.026233 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2026-01-10 14:59:02.026240 | orchestrator | Saturday 10 January 2026 14:55:02 +0000 (0:00:00.964) 0:04:26.582 ****** 2026-01-10 14:59:02.026247 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-01-10 14:59:02.026253 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-01-10 14:59:02.026259 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-01-10 14:59:02.026265 | orchestrator | 2026-01-10 14:59:02.026272 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2026-01-10 14:59:02.026278 | orchestrator | Saturday 10 January 2026 14:55:03 +0000 (0:00:00.971) 0:04:27.553 ****** 2026-01-10 14:59:02.026284 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:59:02.026292 | orchestrator | ok: 
[testbed-node-4] 2026-01-10 14:59:02.026298 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:59:02.026304 | orchestrator | 2026-01-10 14:59:02.026311 | orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2026-01-10 14:59:02.026317 | orchestrator | Saturday 10 January 2026 14:55:04 +0000 (0:00:00.536) 0:04:28.090 ****** 2026-01-10 14:59:02.026324 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:59:02.026330 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:59:02.026335 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:59:02.026341 | orchestrator | 2026-01-10 14:59:02.026347 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 2026-01-10 14:59:02.026353 | orchestrator | Saturday 10 January 2026 14:55:05 +0000 (0:00:00.891) 0:04:28.981 ****** 2026-01-10 14:59:02.026370 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-01-10 14:59:02.026376 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-01-10 14:59:02.026381 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-01-10 14:59:02.026386 | orchestrator | 2026-01-10 14:59:02.026392 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2026-01-10 14:59:02.026407 | orchestrator | Saturday 10 January 2026 14:55:06 +0000 (0:00:01.159) 0:04:30.141 ****** 2026-01-10 14:59:02.026414 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-01-10 14:59:02.026421 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-01-10 14:59:02.026433 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-01-10 14:59:02.026439 | orchestrator | 2026-01-10 14:59:02.026444 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2026-01-10 14:59:02.026450 | orchestrator | Saturday 10 January 2026 14:55:07 +0000 (0:00:01.392) 0:04:31.533 ****** 2026-01-10 
14:59:02.026455 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-01-10 14:59:02.026461 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-01-10 14:59:02.026466 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-01-10 14:59:02.026472 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt) 2026-01-10 14:59:02.026477 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt) 2026-01-10 14:59:02.026483 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt) 2026-01-10 14:59:02.026489 | orchestrator | 2026-01-10 14:59:02.026495 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2026-01-10 14:59:02.026501 | orchestrator | Saturday 10 January 2026 14:55:12 +0000 (0:00:04.189) 0:04:35.722 ****** 2026-01-10 14:59:02.026507 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:59:02.026513 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:59:02.026519 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:59:02.026525 | orchestrator | 2026-01-10 14:59:02.026531 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2026-01-10 14:59:02.026537 | orchestrator | Saturday 10 January 2026 14:55:12 +0000 (0:00:00.521) 0:04:36.243 ****** 2026-01-10 14:59:02.026543 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:59:02.026549 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:59:02.026556 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:59:02.026562 | orchestrator | 2026-01-10 14:59:02.026568 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2026-01-10 14:59:02.026575 | orchestrator | Saturday 10 January 2026 14:55:12 +0000 (0:00:00.313) 0:04:36.557 ****** 2026-01-10 14:59:02.026581 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:59:02.026588 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:59:02.026594 | 
orchestrator | changed: [testbed-node-5] 2026-01-10 14:59:02.026600 | orchestrator | 2026-01-10 14:59:02.026607 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2026-01-10 14:59:02.026613 | orchestrator | Saturday 10 January 2026 14:55:14 +0000 (0:00:01.210) 0:04:37.768 ****** 2026-01-10 14:59:02.026621 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-01-10 14:59:02.026629 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-01-10 14:59:02.026636 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-01-10 14:59:02.026642 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-01-10 14:59:02.026649 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-01-10 14:59:02.026656 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-01-10 14:59:02.026670 | orchestrator | 2026-01-10 14:59:02.026677 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2026-01-10 14:59:02.026684 | orchestrator | Saturday 10 January 2026 14:55:17 +0000 (0:00:03.351) 0:04:41.119 ****** 2026-01-10 14:59:02.026691 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-01-10 14:59:02.026697 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-01-10 14:59:02.026704 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-01-10 14:59:02.026711 | orchestrator | changed: 
[testbed-node-3] => (item=None) 2026-01-10 14:59:02.026717 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:59:02.026724 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-01-10 14:59:02.026731 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:59:02.026737 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-01-10 14:59:02.026744 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:59:02.026751 | orchestrator | 2026-01-10 14:59:02.026757 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2026-01-10 14:59:02.026763 | orchestrator | Saturday 10 January 2026 14:55:20 +0000 (0:00:03.236) 0:04:44.356 ****** 2026-01-10 14:59:02.026770 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:59:02.026775 | orchestrator | 2026-01-10 14:59:02.026782 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2026-01-10 14:59:02.026788 | orchestrator | Saturday 10 January 2026 14:55:20 +0000 (0:00:00.125) 0:04:44.482 ****** 2026-01-10 14:59:02.026794 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:59:02.026801 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:59:02.026807 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:59:02.026813 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:59:02.026819 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:59:02.026825 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:59:02.026832 | orchestrator | 2026-01-10 14:59:02.026838 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2026-01-10 14:59:02.026845 | orchestrator | Saturday 10 January 2026 14:55:21 +0000 (0:00:00.585) 0:04:45.068 ****** 2026-01-10 14:59:02.026852 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-01-10 14:59:02.026858 | orchestrator | 2026-01-10 14:59:02.026864 | orchestrator | TASK [nova-cell : Set vendordata file path] 
************************************ 2026-01-10 14:59:02.026879 | orchestrator | Saturday 10 January 2026 14:55:22 +0000 (0:00:00.696) 0:04:45.764 ****** 2026-01-10 14:59:02.026886 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:59:02.026892 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:59:02.026899 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:59:02.026912 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:59:02.026918 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:59:02.026925 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:59:02.026932 | orchestrator | 2026-01-10 14:59:02.026939 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2026-01-10 14:59:02.026946 | orchestrator | Saturday 10 January 2026 14:55:22 +0000 (0:00:00.794) 0:04:46.559 ****** 2026-01-10 14:59:02.026954 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-10 14:59:02.026969 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-10 14:59:02.026977 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-10 14:59:02.026984 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-10 14:59:02.027002 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-10 14:59:02.027012 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-10 14:59:02.027020 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-10 14:59:02.027176 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-10 14:59:02.027191 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-10 14:59:02.027200 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-10 14:59:02.027208 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-10 14:59:02.027230 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-10 14:59:02.027238 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 
'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-10 14:59:02.027251 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-10 14:59:02.027258 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-10 14:59:02.027265 | orchestrator | 2026-01-10 14:59:02.027271 
| orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2026-01-10 14:59:02.027277 | orchestrator | Saturday 10 January 2026 14:55:26 +0000 (0:00:03.684) 0:04:50.243 ****** 2026-01-10 14:59:02.027284 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-10 14:59:02.027300 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-10 14:59:02.027308 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-10 14:59:02.027319 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-10 14:59:02.027327 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 
'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-10 14:59:02.027334 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-10 14:59:02.027340 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-10 14:59:02.027572 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 
'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-10 14:59:02.027599 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-10 14:59:02.027607 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-10 14:59:02.027615 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-10 14:59:02.027621 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-10 14:59:02.027628 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 
'timeout': '30'}}}) 2026-01-10 14:59:02.027645 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-10 14:59:02.027658 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-10 14:59:02.027665 | orchestrator | 2026-01-10 14:59:02.027672 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2026-01-10 14:59:02.027679 | orchestrator | Saturday 10 January 2026 14:55:32 +0000 (0:00:06.266) 0:04:56.510 ****** 2026-01-10 14:59:02.027686 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:59:02.027694 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:59:02.027700 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:59:02.027707 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:59:02.027713 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:59:02.027719 | orchestrator | skipping: [testbed-node-2] 2026-01-10 
14:59:02.027726 | orchestrator | 2026-01-10 14:59:02.027733 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2026-01-10 14:59:02.027739 | orchestrator | Saturday 10 January 2026 14:55:34 +0000 (0:00:01.466) 0:04:57.976 ****** 2026-01-10 14:59:02.027746 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-01-10 14:59:02.027752 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-01-10 14:59:02.027759 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-01-10 14:59:02.027766 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-01-10 14:59:02.027772 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-01-10 14:59:02.027779 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-01-10 14:59:02.027787 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:59:02.027794 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-01-10 14:59:02.027800 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-01-10 14:59:02.027806 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:59:02.027813 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-01-10 14:59:02.027819 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:59:02.027826 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-01-10 14:59:02.027834 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-01-10 14:59:02.027841 | orchestrator | changed: [testbed-node-5] => 
(item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-01-10 14:59:02.027847 | orchestrator | 2026-01-10 14:59:02.027854 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2026-01-10 14:59:02.027861 | orchestrator | Saturday 10 January 2026 14:55:38 +0000 (0:00:03.977) 0:05:01.954 ****** 2026-01-10 14:59:02.027867 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:59:02.027874 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:59:02.027880 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:59:02.027886 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:59:02.027893 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:59:02.027899 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:59:02.027906 | orchestrator | 2026-01-10 14:59:02.027913 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2026-01-10 14:59:02.027923 | orchestrator | Saturday 10 January 2026 14:55:38 +0000 (0:00:00.621) 0:05:02.576 ****** 2026-01-10 14:59:02.027929 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-01-10 14:59:02.027935 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-01-10 14:59:02.027941 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-01-10 14:59:02.027948 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-01-10 14:59:02.027954 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-01-10 14:59:02.027965 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-01-10 
14:59:02.027972 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-01-10 14:59:02.027982 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-01-10 14:59:02.027989 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-01-10 14:59:02.027996 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-01-10 14:59:02.028003 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:59:02.028009 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-01-10 14:59:02.028015 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:59:02.028020 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-01-10 14:59:02.028026 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:59:02.028032 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-01-10 14:59:02.028038 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-01-10 14:59:02.028044 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-01-10 14:59:02.028051 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-01-10 14:59:02.028057 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-01-10 14:59:02.028063 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 
'sasl.conf', 'service': 'nova-libvirt'}) 2026-01-10 14:59:02.028069 | orchestrator | 2026-01-10 14:59:02.028075 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2026-01-10 14:59:02.028081 | orchestrator | Saturday 10 January 2026 14:55:44 +0000 (0:00:05.522) 0:05:08.098 ****** 2026-01-10 14:59:02.028139 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-01-10 14:59:02.028149 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-01-10 14:59:02.028155 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-01-10 14:59:02.028161 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-01-10 14:59:02.028168 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-01-10 14:59:02.028174 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-01-10 14:59:02.028181 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-01-10 14:59:02.028196 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-01-10 14:59:02.028202 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-01-10 14:59:02.028209 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-01-10 14:59:02.028216 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-01-10 14:59:02.028222 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-01-10 14:59:02.028229 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:59:02.028235 | orchestrator | skipping: [testbed-node-0] => 
(item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-01-10 14:59:02.028242 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-01-10 14:59:02.028250 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-01-10 14:59:02.028256 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:59:02.028266 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-01-10 14:59:02.028272 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:59:02.028279 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-01-10 14:59:02.028285 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-01-10 14:59:02.028292 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-01-10 14:59:02.028298 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-01-10 14:59:02.028304 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-01-10 14:59:02.028312 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-01-10 14:59:02.028318 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-01-10 14:59:02.028325 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-01-10 14:59:02.028332 | orchestrator | 2026-01-10 14:59:02.028346 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2026-01-10 14:59:02.028354 | orchestrator | Saturday 10 January 2026 14:55:52 +0000 (0:00:07.605) 0:05:15.704 ****** 2026-01-10 14:59:02.028365 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:59:02.028373 | orchestrator | skipping: [testbed-node-4] 2026-01-10 
14:59:02.028379 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:59:02.028386 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:59:02.028393 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:59:02.028400 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:59:02.028407 | orchestrator | 2026-01-10 14:59:02.028414 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2026-01-10 14:59:02.028421 | orchestrator | Saturday 10 January 2026 14:55:52 +0000 (0:00:00.800) 0:05:16.505 ****** 2026-01-10 14:59:02.028428 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:59:02.028435 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:59:02.028441 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:59:02.028448 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:59:02.028455 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:59:02.028462 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:59:02.028470 | orchestrator | 2026-01-10 14:59:02.028478 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2026-01-10 14:59:02.028484 | orchestrator | Saturday 10 January 2026 14:55:53 +0000 (0:00:00.649) 0:05:17.154 ****** 2026-01-10 14:59:02.028491 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:59:02.028497 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:59:02.028503 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:59:02.028510 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:59:02.028522 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:59:02.028528 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:59:02.028535 | orchestrator | 2026-01-10 14:59:02.028540 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2026-01-10 14:59:02.028546 | orchestrator | Saturday 10 January 2026 14:55:55 +0000 (0:00:02.074) 0:05:19.228 ****** 2026-01-10 
14:59:02.028555 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-10 14:59:02.028565 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-10 14:59:02.028572 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-01-10 14:59:02.028586 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-10 14:59:02.028592 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-10 14:59:02.028637 | orchestrator | skipping: [testbed-node-4] => 
(item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-01-10 14:59:02.028645 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:59:02.028651 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:59:02.028658 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-10 14:59:02.028665 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-10 14:59:02.028677 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-01-10 14:59:02.028684 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:59:02.028695 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': 
'30'}}})  2026-01-10 14:59:02.028709 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-10 14:59:02.028716 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:59:02.028722 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-01-10 14:59:02.028729 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  
2026-01-10 14:59:02.028735 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:59:02.028743 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-01-10 14:59:02.028749 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-10 14:59:02.028755 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:59:02.028761 | orchestrator | 2026-01-10 14:59:02.028767 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2026-01-10 14:59:02.028773 | orchestrator | Saturday 10 January 2026 14:55:57 +0000 (0:00:01.554) 0:05:20.783 ****** 2026-01-10 14:59:02.028779 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-01-10 14:59:02.028789 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-01-10 14:59:02.028795 | orchestrator | skipping: [testbed-node-3] 2026-01-10 
14:59:02.028801 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2026-01-10 14:59:02.028817 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-01-10 14:59:02.028824 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:59:02.028831 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-01-10 14:59:02.028837 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-01-10 14:59:02.028843 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:59:02.028851 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2026-01-10 14:59:02.028857 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-01-10 14:59:02.028864 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:59:02.028870 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2026-01-10 14:59:02.028877 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-01-10 14:59:02.028883 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:59:02.028890 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-01-10 14:59:02.028897 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2026-01-10 14:59:02.028904 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:59:02.028911 | orchestrator | 2026-01-10 14:59:02.028918 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2026-01-10 14:59:02.028925 | orchestrator | Saturday 10 January 2026 14:55:57 +0000 (0:00:00.887) 0:05:21.670 ****** 2026-01-10 14:59:02.028933 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-10 14:59:02.028941 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-10 14:59:02.028948 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-10 14:59:02.028970 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-10 14:59:02.028979 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-10 14:59:02.028986 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': 
True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-10 14:59:02.028994 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-10 14:59:02.029001 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-10 14:59:02.029008 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-10 14:59:02.029016 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-10 14:59:02.029032 | orchestrator | changed: [testbed-node-1] => (item=2026-01-10 14:59:02 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:59:02.029044 | orchestrator | {'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-10 14:59:02.029052 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': 
{'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-10 14:59:02.029060 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-10 14:59:02.029067 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-10 14:59:02.029075 | 
orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-10 14:59:02.029112 | orchestrator | 2026-01-10 14:59:02.029122 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-01-10 14:59:02.029129 | orchestrator | Saturday 10 January 2026 14:56:00 +0000 (0:00:02.975) 0:05:24.646 ****** 2026-01-10 14:59:02.029136 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:59:02.029143 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:59:02.029149 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:59:02.029155 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:59:02.029162 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:59:02.029168 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:59:02.029175 | orchestrator | 2026-01-10 14:59:02.029181 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-01-10 14:59:02.029187 | orchestrator | Saturday 10 January 2026 14:56:01 +0000 (0:00:00.728) 0:05:25.374 ****** 2026-01-10 14:59:02.029194 | orchestrator | 2026-01-10 14:59:02.029200 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-01-10 14:59:02.029211 | 
orchestrator | Saturday 10 January 2026 14:56:01 +0000 (0:00:00.128) 0:05:25.503 ****** 2026-01-10 14:59:02.029218 | orchestrator | 2026-01-10 14:59:02.029225 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-01-10 14:59:02.029236 | orchestrator | Saturday 10 January 2026 14:56:01 +0000 (0:00:00.120) 0:05:25.624 ****** 2026-01-10 14:59:02.029243 | orchestrator | 2026-01-10 14:59:02.029250 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-01-10 14:59:02.029257 | orchestrator | Saturday 10 January 2026 14:56:02 +0000 (0:00:00.175) 0:05:25.799 ****** 2026-01-10 14:59:02.029264 | orchestrator | 2026-01-10 14:59:02.029271 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-01-10 14:59:02.029279 | orchestrator | Saturday 10 January 2026 14:56:02 +0000 (0:00:00.123) 0:05:25.922 ****** 2026-01-10 14:59:02.029286 | orchestrator | 2026-01-10 14:59:02.029292 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-01-10 14:59:02.029300 | orchestrator | Saturday 10 January 2026 14:56:02 +0000 (0:00:00.119) 0:05:26.042 ****** 2026-01-10 14:59:02.029306 | orchestrator | 2026-01-10 14:59:02.029313 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2026-01-10 14:59:02.029319 | orchestrator | Saturday 10 January 2026 14:56:02 +0000 (0:00:00.239) 0:05:26.281 ****** 2026-01-10 14:59:02.029325 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:59:02.029331 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:59:02.029337 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:59:02.029343 | orchestrator | 2026-01-10 14:59:02.029349 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2026-01-10 14:59:02.029354 | orchestrator | Saturday 10 January 2026 14:56:08 +0000 (0:00:06.305) 
0:05:32.587 ****** 2026-01-10 14:59:02.029360 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:59:02.029366 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:59:02.029372 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:59:02.029379 | orchestrator | 2026-01-10 14:59:02.029385 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2026-01-10 14:59:02.029391 | orchestrator | Saturday 10 January 2026 14:56:22 +0000 (0:00:13.533) 0:05:46.120 ****** 2026-01-10 14:59:02.029398 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:59:02.029404 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:59:02.029410 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:59:02.029417 | orchestrator | 2026-01-10 14:59:02.029423 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2026-01-10 14:59:02.029429 | orchestrator | Saturday 10 January 2026 14:56:40 +0000 (0:00:18.346) 0:06:04.467 ****** 2026-01-10 14:59:02.029435 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:59:02.029441 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:59:02.029448 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:59:02.029460 | orchestrator | 2026-01-10 14:59:02.029466 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2026-01-10 14:59:02.029471 | orchestrator | Saturday 10 January 2026 14:57:09 +0000 (0:00:28.822) 0:06:33.289 ****** 2026-01-10 14:59:02.029477 | orchestrator | FAILED - RETRYING: [testbed-node-4]: Checking libvirt container is ready (10 retries left). 2026-01-10 14:59:02.029485 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:59:02.029491 | orchestrator | FAILED - RETRYING: [testbed-node-5]: Checking libvirt container is ready (10 retries left). 
2026-01-10 14:59:02.029497 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:59:02.029503 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:59:02.029510 | orchestrator | 2026-01-10 14:59:02.029516 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2026-01-10 14:59:02.029522 | orchestrator | Saturday 10 January 2026 14:57:15 +0000 (0:00:06.169) 0:06:39.458 ****** 2026-01-10 14:59:02.029528 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:59:02.029534 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:59:02.029540 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:59:02.029547 | orchestrator | 2026-01-10 14:59:02.029553 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2026-01-10 14:59:02.029559 | orchestrator | Saturday 10 January 2026 14:57:16 +0000 (0:00:00.704) 0:06:40.163 ****** 2026-01-10 14:59:02.029565 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:59:02.029571 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:59:02.029577 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:59:02.029584 | orchestrator | 2026-01-10 14:59:02.029590 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2026-01-10 14:59:02.029596 | orchestrator | Saturday 10 January 2026 14:57:43 +0000 (0:00:26.551) 0:07:06.715 ****** 2026-01-10 14:59:02.029603 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:59:02.029609 | orchestrator | 2026-01-10 14:59:02.029616 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2026-01-10 14:59:02.029622 | orchestrator | Saturday 10 January 2026 14:57:43 +0000 (0:00:00.125) 0:07:06.840 ****** 2026-01-10 14:59:02.029629 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:59:02.029635 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:59:02.029641 | orchestrator | skipping: [testbed-node-5] 
2026-01-10 14:59:02.029648 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:59:02.029654 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:59:02.029661 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 2026-01-10 14:59:02.029669 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-01-10 14:59:02.029675 | orchestrator | 2026-01-10 14:59:02.029681 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2026-01-10 14:59:02.029688 | orchestrator | Saturday 10 January 2026 14:58:06 +0000 (0:00:23.297) 0:07:30.138 ****** 2026-01-10 14:59:02.029694 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:59:02.029700 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:59:02.029706 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:59:02.029712 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:59:02.029718 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:59:02.029724 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:59:02.029731 | orchestrator | 2026-01-10 14:59:02.029738 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2026-01-10 14:59:02.029751 | orchestrator | Saturday 10 January 2026 14:58:16 +0000 (0:00:10.001) 0:07:40.139 ****** 2026-01-10 14:59:02.029758 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:59:02.029764 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:59:02.029776 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:59:02.029782 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:59:02.029789 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:59:02.029796 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-4 2026-01-10 14:59:02.029808 | orchestrator | 2026-01-10 14:59:02.029814 | orchestrator | TASK [nova-cell : 
Get a list of existing cells] ******************************** 2026-01-10 14:59:02.029820 | orchestrator | Saturday 10 January 2026 14:58:20 +0000 (0:00:03.673) 0:07:43.812 ****** 2026-01-10 14:59:02.029827 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-01-10 14:59:02.029833 | orchestrator | 2026-01-10 14:59:02.029840 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-01-10 14:59:02.029845 | orchestrator | Saturday 10 January 2026 14:58:35 +0000 (0:00:15.426) 0:07:59.238 ****** 2026-01-10 14:59:02.029852 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-01-10 14:59:02.029858 | orchestrator | 2026-01-10 14:59:02.029864 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2026-01-10 14:59:02.029870 | orchestrator | Saturday 10 January 2026 14:58:36 +0000 (0:00:01.377) 0:08:00.616 ****** 2026-01-10 14:59:02.029876 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:59:02.029882 | orchestrator | 2026-01-10 14:59:02.029889 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2026-01-10 14:59:02.029896 | orchestrator | Saturday 10 January 2026 14:58:38 +0000 (0:00:01.291) 0:08:01.908 ****** 2026-01-10 14:59:02.029902 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-01-10 14:59:02.029908 | orchestrator | 2026-01-10 14:59:02.029915 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************ 2026-01-10 14:59:02.029922 | orchestrator | Saturday 10 January 2026 14:58:51 +0000 (0:00:13.772) 0:08:15.680 ****** 2026-01-10 14:59:02.029928 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:59:02.029935 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:59:02.029941 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:59:02.029948 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:59:02.029955 | orchestrator | ok: 
[testbed-node-1] 2026-01-10 14:59:02.029961 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:59:02.029967 | orchestrator | 2026-01-10 14:59:02.029974 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2026-01-10 14:59:02.029980 | orchestrator | 2026-01-10 14:59:02.029987 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2026-01-10 14:59:02.029994 | orchestrator | Saturday 10 January 2026 14:58:54 +0000 (0:00:02.037) 0:08:17.718 ****** 2026-01-10 14:59:02.030000 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:59:02.030007 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:59:02.030013 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:59:02.030113 | orchestrator | 2026-01-10 14:59:02.030121 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2026-01-10 14:59:02.030127 | orchestrator | 2026-01-10 14:59:02.030132 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2026-01-10 14:59:02.030138 | orchestrator | Saturday 10 January 2026 14:58:55 +0000 (0:00:01.320) 0:08:19.038 ****** 2026-01-10 14:59:02.030152 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:59:02.030159 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:59:02.030164 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:59:02.030170 | orchestrator | 2026-01-10 14:59:02.030177 | orchestrator | PLAY [Reload Nova cell services] *********************************************** 2026-01-10 14:59:02.030183 | orchestrator | 2026-01-10 14:59:02.030190 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] ********* 2026-01-10 14:59:02.030196 | orchestrator | Saturday 10 January 2026 14:58:55 +0000 (0:00:00.526) 0:08:19.564 ****** 2026-01-10 14:59:02.030203 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)  2026-01-10 14:59:02.030209 | 
orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2026-01-10 14:59:02.030216 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2026-01-10 14:59:02.030223 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)
2026-01-10 14:59:02.030229 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)
2026-01-10 14:59:02.030235 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)
2026-01-10 14:59:02.030250 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:59:02.030256 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)
2026-01-10 14:59:02.030262 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2026-01-10 14:59:02.030268 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2026-01-10 14:59:02.030275 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)
2026-01-10 14:59:02.030282 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)
2026-01-10 14:59:02.030288 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)
2026-01-10 14:59:02.030295 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:59:02.030301 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)
2026-01-10 14:59:02.030307 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2026-01-10 14:59:02.030314 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2026-01-10 14:59:02.030320 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)
2026-01-10 14:59:02.030327 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)
2026-01-10 14:59:02.030333 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)
2026-01-10 14:59:02.030340 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:59:02.030347 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)
2026-01-10 14:59:02.030353 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2026-01-10 14:59:02.030359 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2026-01-10 14:59:02.030375 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)
2026-01-10 14:59:02.030381 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)
2026-01-10 14:59:02.030388 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)
2026-01-10 14:59:02.030402 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:59:02.030408 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)
2026-01-10 14:59:02.030413 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2026-01-10 14:59:02.030419 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2026-01-10 14:59:02.030425 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)
2026-01-10 14:59:02.030431 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)
2026-01-10 14:59:02.030437 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)
2026-01-10 14:59:02.030443 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:59:02.030448 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)
2026-01-10 14:59:02.030454 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2026-01-10 14:59:02.030460 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2026-01-10 14:59:02.030465 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)
2026-01-10 14:59:02.030471 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)
2026-01-10 14:59:02.030477 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)
2026-01-10 14:59:02.030483 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:59:02.030488 | orchestrator |
2026-01-10 14:59:02.030493 | orchestrator | PLAY [Reload global Nova API services] *****************************************
2026-01-10 14:59:02.030499 | orchestrator |
2026-01-10 14:59:02.030504 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] ***************
2026-01-10 14:59:02.030510 | orchestrator | Saturday 10 January 2026 14:58:57 +0000 (0:00:01.327) 0:08:20.892 ******
2026-01-10 14:59:02.030516 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)
2026-01-10 14:59:02.030521 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)
2026-01-10 14:59:02.030528 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:59:02.030533 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)
2026-01-10 14:59:02.030538 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)
2026-01-10 14:59:02.030551 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:59:02.030558 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)
2026-01-10 14:59:02.030564 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)
2026-01-10 14:59:02.030571 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:59:02.030578 | orchestrator |
2026-01-10 14:59:02.030584 | orchestrator | PLAY [Run Nova API online data migrations] *************************************
2026-01-10 14:59:02.030591 | orchestrator |
2026-01-10 14:59:02.030598 | orchestrator | TASK [nova : Run Nova API online database migrations] **************************
2026-01-10 14:59:02.030604 | orchestrator | Saturday 10 January 2026 14:58:57 +0000 (0:00:00.741) 0:08:21.634 ******
2026-01-10 14:59:02.030611 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:59:02.030617 | orchestrator |
2026-01-10 14:59:02.030623 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************
2026-01-10 14:59:02.030629 | orchestrator |
2026-01-10 14:59:02.030635 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ********************
2026-01-10 14:59:02.030641 | orchestrator | Saturday 10 January 2026 14:58:58 +0000 (0:00:00.675) 0:08:22.310 ******
2026-01-10 14:59:02.030648 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:59:02.030654 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:59:02.030660 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:59:02.030666 | orchestrator |
2026-01-10 14:59:02.030672 | orchestrator | PLAY RECAP *********************************************************************
2026-01-10 14:59:02.030679 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:59:02.030688 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0
2026-01-10 14:59:02.030695 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2026-01-10 14:59:02.030702 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2026-01-10 14:59:02.030708 | orchestrator | testbed-node-3 : ok=38  changed=27  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2026-01-10 14:59:02.030715 | orchestrator | testbed-node-4 : ok=42  changed=27  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0
2026-01-10 14:59:02.030722 | orchestrator | testbed-node-5 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2026-01-10 14:59:02.030729 | orchestrator |
2026-01-10 14:59:02.030735 | orchestrator |
2026-01-10 14:59:02.030742 | orchestrator | TASKS RECAP ********************************************************************
2026-01-10 14:59:02.030749 | orchestrator | Saturday 10 January 2026 14:58:59 +0000 (0:00:00.450) 0:08:22.760 ******
2026-01-10 14:59:02.030756 | orchestrator | ===============================================================================
2026-01-10 14:59:02.030763 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 30.92s
2026-01-10 14:59:02.030770 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 28.82s
2026-01-10 14:59:02.030782 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 26.55s
2026-01-10 14:59:02.030788 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 23.30s
2026-01-10 14:59:02.030799 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 21.91s
2026-01-10 14:59:02.030805 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 19.56s
2026-01-10 14:59:02.030812 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 18.47s
2026-01-10 14:59:02.030819 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 18.35s
2026-01-10 14:59:02.030829 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 15.96s
2026-01-10 14:59:02.030836 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 15.43s
2026-01-10 14:59:02.030843 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 13.77s
2026-01-10 14:59:02.030849 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 13.53s
2026-01-10 14:59:02.030855 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 13.40s
2026-01-10 14:59:02.030862 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.19s
2026-01-10 14:59:02.030868 | orchestrator | nova-cell : Create cell ------------------------------------------------ 12.17s
2026-01-10 14:59:02.030875 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------ 10.00s
2026-01-10 14:59:02.030882 | orchestrator | service-ks-register : nova | Granting user roles ------------------------ 7.86s
2026-01-10 14:59:02.030889 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 7.61s
2026-01-10 14:59:02.030896 | orchestrator | nova-cell : Copying files for nova-ssh ---------------------------------- 7.61s
2026-01-10 14:59:02.030902 | orchestrator | service-ks-register : nova | Creating endpoints ------------------------- 6.68s
2026-01-10 14:59:05.066648 | orchestrator | 2026-01-10 14:59:05 | INFO  | Task 8e3a721f-515c-40ab-a6e8-a989eaed8f4c is in state STARTED
2026-01-10 14:59:05.066723 | orchestrator | 2026-01-10 14:59:05 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:59:08.116417 | orchestrator | 2026-01-10 14:59:08 | INFO  | Task 8e3a721f-515c-40ab-a6e8-a989eaed8f4c is in state STARTED
2026-01-10 14:59:08.116493 | orchestrator | 2026-01-10 14:59:08 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:59:11.160454 | orchestrator | 2026-01-10 14:59:11 | INFO  | Task 8e3a721f-515c-40ab-a6e8-a989eaed8f4c is in state STARTED
2026-01-10 14:59:11.160511 | orchestrator | 2026-01-10 14:59:11 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:59:14.208800 | orchestrator | 2026-01-10 14:59:14 | INFO  | Task 8e3a721f-515c-40ab-a6e8-a989eaed8f4c is in state STARTED
2026-01-10 14:59:14.208862 | orchestrator | 2026-01-10 14:59:14 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:59:17.250953 | orchestrator | 2026-01-10 14:59:17 | INFO  | Task 8e3a721f-515c-40ab-a6e8-a989eaed8f4c is in state STARTED
2026-01-10 14:59:17.251044 | orchestrator | 2026-01-10 14:59:17 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:59:20.297190 | orchestrator | 2026-01-10 14:59:20 | INFO  | Task 8e3a721f-515c-40ab-a6e8-a989eaed8f4c is in state STARTED
2026-01-10 14:59:20.297263 | orchestrator | 2026-01-10 14:59:20 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:59:23.332596 | orchestrator | 2026-01-10 14:59:23 | INFO  | Task 8e3a721f-515c-40ab-a6e8-a989eaed8f4c is in state STARTED
2026-01-10 14:59:23.332646 | orchestrator | 2026-01-10 14:59:23 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:59:26.379778 | orchestrator | 2026-01-10 14:59:26 | INFO  | Task 8e3a721f-515c-40ab-a6e8-a989eaed8f4c is in state STARTED
2026-01-10 14:59:26.379877 | orchestrator | 2026-01-10 14:59:26 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:59:29.423402 | orchestrator | 2026-01-10 14:59:29 | INFO  | Task 8e3a721f-515c-40ab-a6e8-a989eaed8f4c is in state STARTED
2026-01-10 14:59:29.423470 | orchestrator | 2026-01-10 14:59:29 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:59:32.471289 | orchestrator | 2026-01-10 14:59:32 | INFO  | Task 8e3a721f-515c-40ab-a6e8-a989eaed8f4c is in state STARTED
2026-01-10 14:59:32.471385 | orchestrator | 2026-01-10 14:59:32 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:59:35.505426 | orchestrator | 2026-01-10 14:59:35 | INFO  | Task 8e3a721f-515c-40ab-a6e8-a989eaed8f4c is in state STARTED
2026-01-10 14:59:35.505482 | orchestrator | 2026-01-10 14:59:35 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:59:38.547891 | orchestrator | 2026-01-10 14:59:38 | INFO  | Task 8e3a721f-515c-40ab-a6e8-a989eaed8f4c is in state STARTED
2026-01-10 14:59:38.547986 | orchestrator | 2026-01-10 14:59:38 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:59:41.597669 | orchestrator | 2026-01-10 14:59:41 | INFO  | Task 8e3a721f-515c-40ab-a6e8-a989eaed8f4c is in state STARTED
2026-01-10 14:59:41.597752 | orchestrator | 2026-01-10 14:59:41 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:59:44.647255 | orchestrator | 2026-01-10 14:59:44 | INFO  | Task 8e3a721f-515c-40ab-a6e8-a989eaed8f4c is in state SUCCESS
2026-01-10 14:59:44.647352 | orchestrator | 2026-01-10 14:59:44 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-01-10 14:59:44.649103 | orchestrator |
2026-01-10 14:59:44.649152 | orchestrator |
2026-01-10 14:59:44.649161 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-10 14:59:44.649168 | orchestrator |
2026-01-10 14:59:44.649173 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-10 14:59:44.649179 | orchestrator | Saturday 10 January 2026 14:54:47 +0000 (0:00:00.423) 0:00:00.423 ******
2026-01-10 14:59:44.649185 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:59:44.649192 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:59:44.649198 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:59:44.649277 | orchestrator |
2026-01-10 14:59:44.649287 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-10 14:59:44.649294 | orchestrator | Saturday 10 January 2026 14:54:47 +0000 (0:00:00.431) 0:00:00.855 ******
2026-01-10 14:59:44.649301 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True)
2026-01-10 14:59:44.649308 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True)
2026-01-10 14:59:44.649314 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True)
2026-01-10 14:59:44.649358 | orchestrator |
2026-01-10 14:59:44.649365 | orchestrator | PLAY [Apply role octavia] ******************************************************
2026-01-10 14:59:44.649371 | orchestrator |
2026-01-10 14:59:44.649378 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-01-10 14:59:44.649384 | orchestrator | Saturday 10 January 2026 14:54:48 +0000 (0:00:00.514) 0:00:01.370 ******
2026-01-10 14:59:44.649391 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:59:44.649398 | orchestrator |
2026-01-10 14:59:44.649404 | orchestrator | TASK [service-ks-register : octavia | Creating services] ***********************
2026-01-10 14:59:44.649411 | orchestrator | Saturday 10 January 2026 14:54:48 +0000 (0:00:00.625) 0:00:01.995 ******
2026-01-10 14:59:44.649418 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer))
2026-01-10 14:59:44.649424 | orchestrator |
2026-01-10 14:59:44.649430 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] **********************
2026-01-10 14:59:44.649436 | orchestrator | Saturday 10 January 2026 14:54:51 +0000 (0:00:03.152) 0:00:05.147 ******
2026-01-10 14:59:44.649454 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal)
2026-01-10 14:59:44.649466 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public)
2026-01-10 14:59:44.649473 | orchestrator |
2026-01-10 14:59:44.649480 | orchestrator | TASK [service-ks-register : octavia | Creating projects] ***********************
2026-01-10 14:59:44.649486 | orchestrator | Saturday 10 January 2026 14:54:57 +0000 (0:00:06.058) 0:00:11.206 ******
2026-01-10 14:59:44.649493 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-01-10 14:59:44.649499 | orchestrator |
2026-01-10 14:59:44.649507 | orchestrator | TASK [service-ks-register : octavia | Creating users] **************************
2026-01-10 14:59:44.649531 | orchestrator | Saturday 10 January 2026 14:55:00 +0000 (0:00:02.721) 0:00:13.927 ******
2026-01-10 14:59:44.649539 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-01-10 14:59:44.649546 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service)
2026-01-10 14:59:44.649552 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service)
2026-01-10 14:59:44.649558 | orchestrator |
2026-01-10 14:59:44.649565 | orchestrator | TASK [service-ks-register : octavia | Creating roles] **************************
2026-01-10 14:59:44.649572 | orchestrator | Saturday 10 January 2026 14:55:08 +0000 (0:00:07.666) 0:00:21.593 ******
2026-01-10 14:59:44.649578 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-01-10 14:59:44.649584 | orchestrator |
2026-01-10 14:59:44.649591 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] *********************
2026-01-10 14:59:44.649598 | orchestrator | Saturday 10 January 2026 14:55:12 +0000 (0:00:04.064) 0:00:25.657 ******
2026-01-10 14:59:44.649605 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin)
2026-01-10 14:59:44.649613 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin)
2026-01-10 14:59:44.649619 | orchestrator |
2026-01-10 14:59:44.649625 | orchestrator | TASK [octavia : Adding octavia related roles] **********************************
2026-01-10 14:59:44.649631 | orchestrator | Saturday 10 January 2026 14:55:18 +0000 (0:00:06.616) 0:00:32.274 ******
2026-01-10 14:59:44.649637 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer)
2026-01-10 14:59:44.649642 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer)
2026-01-10 14:59:44.649648 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member)
2026-01-10 14:59:44.649654 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin)
2026-01-10 14:59:44.649660 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin)
2026-01-10 14:59:44.649666 | orchestrator |
2026-01-10 14:59:44.649672 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-01-10 14:59:44.649678 | orchestrator | Saturday 10 January 2026 14:55:34 +0000 (0:00:15.055) 0:00:47.329 ******
2026-01-10 14:59:44.649685 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:59:44.649691 | orchestrator |
2026-01-10 14:59:44.649697 | orchestrator | TASK [octavia : Create amphora flavor] *****************************************
2026-01-10 14:59:44.649704 | orchestrator | Saturday 10 January 2026 14:55:34 +0000 (0:00:00.772) 0:00:48.102 ******
2026-01-10 14:59:44.649711 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:59:44.649717 | orchestrator |
2026-01-10 14:59:44.649723 | orchestrator | TASK [octavia : Create nova keypair for amphora] *******************************
2026-01-10 14:59:44.649730 | orchestrator | Saturday 10 January 2026 14:55:40 +0000 (0:00:05.509) 0:00:53.611 ******
2026-01-10 14:59:44.649737 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:59:44.649744 | orchestrator |
2026-01-10 14:59:44.649759 | orchestrator | TASK [octavia : Get service project id] ****************************************
2026-01-10 14:59:44.650259 | orchestrator | Saturday 10 January 2026 14:55:44 +0000 (0:00:04.531) 0:00:58.143 ******
2026-01-10 14:59:44.650283 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:59:44.650290 | orchestrator |
2026-01-10 14:59:44.650297 | orchestrator | TASK [octavia : Create security groups for octavia] ****************************
2026-01-10 14:59:44.650302 | orchestrator | Saturday 10 January 2026 14:55:47 +0000 (0:00:02.981) 0:01:01.124 ******
2026-01-10 14:59:44.650306 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp)
2026-01-10 14:59:44.650310 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp)
2026-01-10 14:59:44.650313 | orchestrator |
2026-01-10 14:59:44.650317 | orchestrator | TASK [octavia : Add rules for security groups] *********************************
2026-01-10 14:59:44.650321 | orchestrator | Saturday 10 January 2026 14:55:58 +0000 (0:00:11.102) 0:01:12.227 ******
2026-01-10 14:59:44.650325 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}])
2026-01-10 14:59:44.650337 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': 22, 'dst_port': 22}])
2026-01-10 14:59:44.650342 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}])
2026-01-10 14:59:44.650346 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}])
2026-01-10 14:59:44.650350 | orchestrator |
2026-01-10 14:59:44.650353 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************
2026-01-10 14:59:44.650357 | orchestrator | Saturday 10 January 2026 14:56:16 +0000 (0:00:17.997) 0:01:30.225 ******
2026-01-10 14:59:44.650362 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:59:44.650369 | orchestrator |
2026-01-10 14:59:44.650375 | orchestrator | TASK [octavia : Create loadbalancer management subnet] *************************
2026-01-10 14:59:44.650381 | orchestrator | Saturday 10 January 2026 14:56:22 +0000 (0:00:05.122) 0:01:35.348 ******
2026-01-10 14:59:44.650388 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:59:44.650394 | orchestrator |
2026-01-10 14:59:44.650400 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] ****************
2026-01-10 14:59:44.650407 | orchestrator | Saturday 10 January 2026 14:56:28 +0000 (0:00:06.247) 0:01:41.595 ******
2026-01-10 14:59:44.650416 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:59:44.650422 | orchestrator |
2026-01-10 14:59:44.650428 | orchestrator | TASK [octavia : Update loadbalancer management subnet] *************************
2026-01-10 14:59:44.650435 | orchestrator | Saturday 10 January 2026 14:56:28 +0000 (0:00:00.287) 0:01:41.883 ******
2026-01-10 14:59:44.650441 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:59:44.650447 | orchestrator |
2026-01-10 14:59:44.650452 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-01-10 14:59:44.650458 | orchestrator | Saturday 10 January 2026 14:56:32 +0000 (0:00:04.097) 0:01:45.981 ******
2026-01-10 14:59:44.650465 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:59:44.650470 | orchestrator |
2026-01-10 14:59:44.650476 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] *****************
2026-01-10 14:59:44.650482 | orchestrator | Saturday 10 January 2026 14:56:33 +0000 (0:00:01.034) 0:01:47.015 ******
2026-01-10 14:59:44.650488 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:59:44.650493 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:59:44.650499 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:59:44.650506 | orchestrator |
2026-01-10 14:59:44.650512 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ********************
2026-01-10 14:59:44.650518 | orchestrator | Saturday 10 January 2026 14:56:39 +0000 (0:00:05.853) 0:01:52.869 ******
2026-01-10 14:59:44.650524 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:59:44.650531 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:59:44.650537 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:59:44.650543 | orchestrator |
2026-01-10 14:59:44.650550 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************
2026-01-10 14:59:44.650556 | orchestrator | Saturday 10 January 2026 14:56:44 +0000 (0:00:04.443) 0:01:57.312 ******
2026-01-10 14:59:44.650561 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:59:44.650567 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:59:44.650573 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:59:44.650580 | orchestrator |
2026-01-10 14:59:44.650586 | orchestrator | TASK [octavia : Install isc-dhcp-client package] *******************************
2026-01-10 14:59:44.650592 | orchestrator | Saturday 10 January 2026 14:56:44 +0000 (0:00:00.818) 0:01:58.131 ******
2026-01-10 14:59:44.650599 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:59:44.650605 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:59:44.650611 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:59:44.650633 | orchestrator |
2026-01-10 14:59:44.650640 | orchestrator | TASK [octavia : Create octavia dhclient conf] **********************************
2026-01-10 14:59:44.650658 | orchestrator | Saturday 10 January 2026 14:56:46 +0000 (0:00:01.982) 0:02:00.113 ******
2026-01-10 14:59:44.650664 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:59:44.650671 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:59:44.650678 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:59:44.650682 | orchestrator |
2026-01-10 14:59:44.650686 | orchestrator | TASK [octavia : Create octavia-interface service] ******************************
2026-01-10 14:59:44.650690 | orchestrator | Saturday 10 January 2026 14:56:48 +0000 (0:00:01.406) 0:02:01.520 ******
2026-01-10 14:59:44.650693 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:59:44.650697 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:59:44.650701 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:59:44.650705 | orchestrator |
2026-01-10 14:59:44.650708 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] *****************
2026-01-10 14:59:44.650712 | orchestrator | Saturday 10 January 2026 14:56:49 +0000 (0:00:01.268) 0:02:02.789 ******
2026-01-10 14:59:44.650721 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:59:44.650725 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:59:44.650729 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:59:44.650733 | orchestrator |
2026-01-10 14:59:44.650762 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ********************
2026-01-10 14:59:44.650769 | orchestrator | Saturday 10 January 2026 14:56:51 +0000 (0:00:02.219) 0:02:05.008 ******
2026-01-10 14:59:44.650776 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:59:44.650783 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:59:44.650790 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:59:44.650796 | orchestrator |
2026-01-10 14:59:44.650802 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] *****************************
2026-01-10 14:59:44.650809 | orchestrator | Saturday 10 January 2026 14:56:53 +0000 (0:00:01.672) 0:02:06.681 ******
2026-01-10 14:59:44.650815 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:59:44.650821 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:59:44.650827 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:59:44.650834 | orchestrator |
2026-01-10 14:59:44.650840 | orchestrator | TASK [octavia : Gather facts] **************************************************
2026-01-10 14:59:44.650846 | orchestrator | Saturday 10 January 2026 14:56:54 +0000 (0:00:00.683) 0:02:07.364 ******
2026-01-10 14:59:44.650852 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:59:44.650859 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:59:44.650865 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:59:44.650871 | orchestrator |
2026-01-10 14:59:44.650878 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-01-10 14:59:44.650884 | orchestrator | Saturday 10 January 2026 14:56:57 +0000 (0:00:02.991) 0:02:10.355 ******
2026-01-10 14:59:44.650891 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:59:44.650898 | orchestrator |
2026-01-10 14:59:44.650904 | orchestrator | TASK [octavia : Get amphora flavor info] ***************************************
2026-01-10 14:59:44.650910 | orchestrator | Saturday 10 January 2026 14:56:57 +0000 (0:00:00.760) 0:02:11.116 ******
2026-01-10 14:59:44.650917 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:59:44.650923 | orchestrator |
2026-01-10 14:59:44.650929 | orchestrator | TASK [octavia : Get service project id] ****************************************
2026-01-10 14:59:44.650936 | orchestrator | Saturday 10 January 2026 14:57:01 +0000 (0:00:04.038) 0:02:15.154 ******
2026-01-10 14:59:44.650942 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:59:44.650949 | orchestrator |
2026-01-10 14:59:44.650955 | orchestrator | TASK [octavia : Get security groups for octavia] *******************************
2026-01-10 14:59:44.650961 | orchestrator | Saturday 10 January 2026 14:57:05 +0000 (0:00:03.704) 0:02:18.859 ******
2026-01-10 14:59:44.650968 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp)
2026-01-10 14:59:44.650975 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp)
2026-01-10 14:59:44.650980 | orchestrator |
2026-01-10 14:59:44.650984 | orchestrator | TASK [octavia : Get loadbalancer management network] ***************************
2026-01-10 14:59:44.650992 | orchestrator | Saturday 10 January 2026 14:57:12 +0000 (0:00:06.593) 0:02:25.452 ******
2026-01-10 14:59:44.650996 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:59:44.651001 | orchestrator |
2026-01-10 14:59:44.651005 | orchestrator | TASK [octavia : Set octavia resources facts] ***********************************
2026-01-10 14:59:44.651009 | orchestrator | Saturday 10 January 2026 14:57:15 +0000 (0:00:03.624) 0:02:29.077 ******
2026-01-10 14:59:44.651013 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:59:44.651032 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:59:44.651038 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:59:44.651044 | orchestrator |
2026-01-10 14:59:44.651050 | orchestrator | TASK [octavia : Ensuring config directories exist] *****************************
2026-01-10 14:59:44.651057 | orchestrator | Saturday 10 January 2026 14:57:16 +0000 (0:00:00.317) 0:02:29.394 ******
2026-01-10 14:59:44.651065 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-01-10 14:59:44.651091 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-01-10 14:59:44.651097 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-01-10 14:59:44.651103 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-01-10 14:59:44.651111 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-01-10 14:59:44.651115 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-01-10 14:59:44.651120 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-01-10 14:59:44.651125 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-01-10 14:59:44.651145 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-01-10 14:59:44.651155 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-01-10 14:59:44.651164 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-01-10 14:59:44.651175 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-01-10 14:59:44.651181 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-01-10 14:59:44.651188 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-01-10 14:59:44.651211 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-10 14:59:44.651218 | orchestrator | 2026-01-10 14:59:44.651224 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************ 2026-01-10 14:59:44.651229 | orchestrator | Saturday 10 January 2026 14:57:18 +0000 (0:00:02.660) 0:02:32.054 ****** 2026-01-10 14:59:44.651235 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:59:44.651241 | orchestrator | 2026-01-10 14:59:44.651247 | orchestrator | TASK [octavia : Set octavia policy file] *************************************** 2026-01-10 14:59:44.651253 | orchestrator | Saturday 10 January 2026 14:57:18 +0000 (0:00:00.148) 0:02:32.203 ****** 2026-01-10 14:59:44.651259 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:59:44.651265 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:59:44.651271 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:59:44.651276 | orchestrator | 2026-01-10 14:59:44.651282 | orchestrator | TASK [octavia : Copying over existing policy file] ***************************** 2026-01-10 14:59:44.651289 | orchestrator | Saturday 10 January 2026 14:57:19 +0000 (0:00:00.633) 0:02:32.837 ****** 2026-01-10 14:59:44.651299 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-10 14:59:44.651306 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-10 14:59:44.651313 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-10 14:59:44.651320 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-10 14:59:44.651329 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-10 14:59:44.651335 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:59:44.651359 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-10 14:59:44.651371 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-10 14:59:44.651377 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-10 14:59:44.651383 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-10 14:59:44.651389 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-10 14:59:44.651396 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:59:44.651421 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-10 14:59:44.651431 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-10 14:59:44.651443 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-10 14:59:44.651450 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-10 14:59:44.651457 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-10 14:59:44.651464 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:59:44.651480 | orchestrator | 2026-01-10 14:59:44.651491 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-01-10 14:59:44.651498 | orchestrator | Saturday 10 January 2026 14:57:20 +0000 (0:00:01.044) 0:02:33.881 ****** 2026-01-10 14:59:44.651504 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:59:44.651511 | orchestrator | 2026-01-10 14:59:44.651516 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 2026-01-10 14:59:44.651522 | orchestrator | Saturday 10 January 2026 14:57:21 +0000 (0:00:00.651) 0:02:34.533 ****** 2026-01-10 14:59:44.651532 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-10 14:59:44.651560 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-10 14:59:44.651568 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-10 14:59:44.651575 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-10 14:59:44.651581 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-10 14:59:44.651588 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-10 14:59:44.651600 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-10 14:59:44.651611 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-10 14:59:44.651616 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-10 14:59:44.651623 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-10 14:59:44.651630 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-10 14:59:44.651636 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-10 14:59:44.651643 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-10 14:59:44.651660 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-10 14:59:44.651667 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-10 14:59:44.651673 | orchestrator | 2026-01-10 14:59:44.651680 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 2026-01-10 14:59:44.651687 | orchestrator | Saturday 10 January 2026 14:57:26 +0000 (0:00:05.392) 0:02:39.925 ****** 2026-01-10 14:59:44.651694 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-10 14:59:44.651701 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-10 14:59:44.651706 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-01-10 14:59:44.651712 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-01-10 14:59:44.651733 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-01-10 14:59:44.651741 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:59:44.651748 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-01-10 14:59:44.651754 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-01-10 14:59:44.651760 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-01-10 14:59:44.651764 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-01-10 14:59:44.651768 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-01-10 14:59:44.651776 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:59:44.651785 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-01-10 14:59:44.651789 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-01-10 14:59:44.651793 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-01-10 14:59:44.651797 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-01-10 14:59:44.651801 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-01-10 14:59:44.651805 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:59:44.651809 | orchestrator |
2026-01-10 14:59:44.651812 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] *****
2026-01-10 14:59:44.651819 | orchestrator | Saturday 10 January 2026 14:57:27 +0000 (0:00:00.909) 0:02:40.835 ******
2026-01-10 14:59:44.651825 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-01-10 14:59:44.651831 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-01-10 14:59:44.651835 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-01-10 14:59:44.651839 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-01-10 14:59:44.651843 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-01-10 14:59:44.651847 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:59:44.651851 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-01-10 14:59:44.651858 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-01-10 14:59:44.651867 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-01-10 14:59:44.651871 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-01-10 14:59:44.651875 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-01-10 14:59:44.651879 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:59:44.651883 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-01-10 14:59:44.651889 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-01-10 14:59:44.651894 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-01-10 14:59:44.651903 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-01-10 14:59:44.651907 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-01-10 14:59:44.651911 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:59:44.651915 | orchestrator |
2026-01-10 14:59:44.651919 | orchestrator | TASK [octavia : Copying over config.json files for services] *******************
2026-01-10 14:59:44.651923 | orchestrator | Saturday 10 January 2026 14:57:28 +0000 (0:00:00.908) 0:02:41.743 ******
2026-01-10 14:59:44.651926 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-01-10 14:59:44.651931 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-01-10 14:59:44.651937 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-01-10 14:59:44.651947 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-01-10 14:59:44.651951 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-01-10 14:59:44.651955 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-01-10 14:59:44.651959 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-01-10 14:59:44.651963 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-01-10 14:59:44.651970 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-01-10 14:59:44.651974 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-01-10 14:59:44.651982 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test':
['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-01-10 14:59:44.651986 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-01-10 14:59:44.651990 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-01-10 14:59:44.651994 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-01-10 14:59:44.652000 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-01-10 14:59:44.652004 | orchestrator |
2026-01-10 14:59:44.652008 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ********************************
2026-01-10 14:59:44.652012 | orchestrator | Saturday 10 January 2026 14:57:33 +0000 (0:00:05.220) 0:02:46.964 ******
2026-01-10 14:59:44.652028 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2)
2026-01-10 14:59:44.652033 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2)
2026-01-10 14:59:44.652037 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2)
2026-01-10 14:59:44.652041 | orchestrator |
2026-01-10 14:59:44.652045 | orchestrator | TASK [octavia : Copying over octavia.conf] *************************************
2026-01-10 14:59:44.652048 | orchestrator | Saturday 10 January 2026 14:57:35 +0000 (0:00:02.105) 0:02:49.069 ******
2026-01-10 14:59:44.652057 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-01-10 14:59:44.652062 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-01-10 14:59:44.652066 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-01-10 14:59:44.652073 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-01-10 14:59:44.652077 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-01-10 14:59:44.652081 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-01-10 14:59:44.652089 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-01-10 14:59:44.652093 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-01-10 14:59:44.652097 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-01-10 14:59:44.652104 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-01-10 14:59:44.652108 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-01-10 14:59:44.652112 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-01-10 14:59:44.652118 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-01-10 14:59:44.652124 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-01-10 14:59:44.652129 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''],
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-10 14:59:44.652135 | orchestrator | 2026-01-10 14:59:44.652139 | orchestrator | TASK [octavia : Copying over Octavia SSH key] ********************************** 2026-01-10 14:59:44.652143 | orchestrator | Saturday 10 January 2026 14:57:55 +0000 (0:00:19.431) 0:03:08.501 ****** 2026-01-10 14:59:44.652146 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:59:44.652150 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:59:44.652154 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:59:44.652158 | orchestrator | 2026-01-10 14:59:44.652162 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ****************** 2026-01-10 14:59:44.652165 | orchestrator | Saturday 10 January 2026 14:57:56 +0000 (0:00:01.430) 0:03:09.932 ****** 2026-01-10 14:59:44.652169 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-01-10 14:59:44.652173 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-01-10 14:59:44.652177 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-01-10 14:59:44.652180 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-01-10 14:59:44.652184 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-01-10 14:59:44.652188 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-01-10 14:59:44.652192 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-01-10 14:59:44.652196 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-01-10 14:59:44.652199 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-01-10 14:59:44.652203 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-01-10 14:59:44.652207 | orchestrator 
| changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-01-10 14:59:44.652211 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-01-10 14:59:44.652214 | orchestrator | 2026-01-10 14:59:44.652218 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************ 2026-01-10 14:59:44.652222 | orchestrator | Saturday 10 January 2026 14:58:01 +0000 (0:00:05.253) 0:03:15.185 ****** 2026-01-10 14:59:44.652226 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-01-10 14:59:44.652230 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-01-10 14:59:44.652233 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-01-10 14:59:44.652237 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-01-10 14:59:44.652241 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-01-10 14:59:44.652244 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-01-10 14:59:44.652248 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-01-10 14:59:44.652252 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-01-10 14:59:44.652256 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-01-10 14:59:44.652259 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-01-10 14:59:44.652263 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-01-10 14:59:44.652267 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-01-10 14:59:44.652271 | orchestrator | 2026-01-10 14:59:44.652274 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] ********** 2026-01-10 14:59:44.652295 | orchestrator | Saturday 10 January 2026 14:58:08 +0000 (0:00:06.171) 0:03:21.357 ****** 2026-01-10 14:59:44.652299 | orchestrator | changed: [testbed-node-0] => 
(item=client.cert-and-key.pem) 2026-01-10 14:59:44.652308 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-01-10 14:59:44.652316 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-01-10 14:59:44.652320 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-01-10 14:59:44.652328 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-01-10 14:59:44.652334 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-01-10 14:59:44.652345 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-01-10 14:59:44.652349 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-01-10 14:59:44.652355 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-01-10 14:59:44.652359 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-01-10 14:59:44.652363 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-01-10 14:59:44.652367 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-01-10 14:59:44.652371 | orchestrator | 2026-01-10 14:59:44.652375 | orchestrator | TASK [octavia : Check octavia containers] ************************************** 2026-01-10 14:59:44.652379 | orchestrator | Saturday 10 January 2026 14:58:15 +0000 (0:00:07.417) 0:03:28.774 ****** 2026-01-10 14:59:44.652383 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-10 14:59:44.652387 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-10 14:59:44.652391 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-10 14:59:44.652395 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-10 14:59:44.652406 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-10 14:59:44.652413 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 
'dimensions': {}}}) 2026-01-10 14:59:44.652420 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-10 14:59:44.652426 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-10 14:59:44.652430 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-10 14:59:44.652434 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-10 14:59:44.652438 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-10 14:59:44.652451 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-10 14:59:44.652455 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-10 14:59:44.652459 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-10 14:59:44.652463 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-10 14:59:44.652467 | orchestrator | 2026-01-10 
14:59:44.652471 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-01-10 14:59:44.652475 | orchestrator | Saturday 10 January 2026 14:58:19 +0000 (0:00:03.778) 0:03:32.552 ****** 2026-01-10 14:59:44.652479 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:59:44.652483 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:59:44.652486 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:59:44.652490 | orchestrator | 2026-01-10 14:59:44.652494 | orchestrator | TASK [octavia : Creating Octavia database] ************************************* 2026-01-10 14:59:44.652498 | orchestrator | Saturday 10 January 2026 14:58:19 +0000 (0:00:00.364) 0:03:32.917 ****** 2026-01-10 14:59:44.652501 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:59:44.652505 | orchestrator | 2026-01-10 14:59:44.652509 | orchestrator | TASK [octavia : Creating Octavia persistence database] ************************* 2026-01-10 14:59:44.652513 | orchestrator | Saturday 10 January 2026 14:58:21 +0000 (0:00:02.010) 0:03:34.928 ****** 2026-01-10 14:59:44.652519 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:59:44.652522 | orchestrator | 2026-01-10 14:59:44.652526 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ******** 2026-01-10 14:59:44.652530 | orchestrator | Saturday 10 January 2026 14:58:24 +0000 (0:00:02.426) 0:03:37.354 ****** 2026-01-10 14:59:44.652534 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:59:44.652538 | orchestrator | 2026-01-10 14:59:44.652541 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] *** 2026-01-10 14:59:44.652545 | orchestrator | Saturday 10 January 2026 14:58:26 +0000 (0:00:02.683) 0:03:40.037 ****** 2026-01-10 14:59:44.652549 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:59:44.652553 | orchestrator | 2026-01-10 14:59:44.652556 | orchestrator | TASK [octavia : Running Octavia 
bootstrap container] *************************** 2026-01-10 14:59:44.652560 | orchestrator | Saturday 10 January 2026 14:58:29 +0000 (0:00:03.097) 0:03:43.135 ****** 2026-01-10 14:59:44.652564 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:59:44.652568 | orchestrator | 2026-01-10 14:59:44.652571 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-01-10 14:59:44.652575 | orchestrator | Saturday 10 January 2026 14:58:51 +0000 (0:00:21.263) 0:04:04.398 ****** 2026-01-10 14:59:44.652579 | orchestrator | 2026-01-10 14:59:44.652583 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-01-10 14:59:44.652586 | orchestrator | Saturday 10 January 2026 14:58:51 +0000 (0:00:00.064) 0:04:04.463 ****** 2026-01-10 14:59:44.652590 | orchestrator | 2026-01-10 14:59:44.652594 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-01-10 14:59:44.652599 | orchestrator | Saturday 10 January 2026 14:58:51 +0000 (0:00:00.073) 0:04:04.537 ****** 2026-01-10 14:59:44.652603 | orchestrator | 2026-01-10 14:59:44.652607 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] ********************** 2026-01-10 14:59:44.652612 | orchestrator | Saturday 10 January 2026 14:58:51 +0000 (0:00:00.071) 0:04:04.608 ****** 2026-01-10 14:59:44.652617 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:59:44.652620 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:59:44.652624 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:59:44.652628 | orchestrator | 2026-01-10 14:59:44.652632 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] ************* 2026-01-10 14:59:44.652635 | orchestrator | Saturday 10 January 2026 14:59:06 +0000 (0:00:14.963) 0:04:19.572 ****** 2026-01-10 14:59:44.652639 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:59:44.652643 | orchestrator | changed: 
[testbed-node-2] 2026-01-10 14:59:44.652647 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:59:44.652650 | orchestrator | 2026-01-10 14:59:44.652654 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] *********** 2026-01-10 14:59:44.652658 | orchestrator | Saturday 10 January 2026 14:59:17 +0000 (0:00:10.895) 0:04:30.468 ****** 2026-01-10 14:59:44.652662 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:59:44.652666 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:59:44.652669 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:59:44.652673 | orchestrator | 2026-01-10 14:59:44.652677 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] ************* 2026-01-10 14:59:44.652681 | orchestrator | Saturday 10 January 2026 14:59:27 +0000 (0:00:10.236) 0:04:40.704 ****** 2026-01-10 14:59:44.652684 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:59:44.652688 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:59:44.652692 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:59:44.652696 | orchestrator | 2026-01-10 14:59:44.652699 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] ******************* 2026-01-10 14:59:44.652703 | orchestrator | Saturday 10 January 2026 14:59:33 +0000 (0:00:05.891) 0:04:46.595 ****** 2026-01-10 14:59:44.652707 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:59:44.652711 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:59:44.652714 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:59:44.652718 | orchestrator | 2026-01-10 14:59:44.652722 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-10 14:59:44.652729 | orchestrator | testbed-node-0 : ok=57  changed=38  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-01-10 14:59:44.652733 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  
rescued=0 ignored=0 2026-01-10 14:59:44.652737 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-01-10 14:59:44.652741 | orchestrator | 2026-01-10 14:59:44.652744 | orchestrator | 2026-01-10 14:59:44.652748 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-10 14:59:44.652752 | orchestrator | Saturday 10 January 2026 14:59:43 +0000 (0:00:10.121) 0:04:56.717 ****** 2026-01-10 14:59:44.652756 | orchestrator | =============================================================================== 2026-01-10 14:59:44.652759 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 21.26s 2026-01-10 14:59:44.652763 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 19.43s 2026-01-10 14:59:44.652767 | orchestrator | octavia : Add rules for security groups -------------------------------- 18.00s 2026-01-10 14:59:44.652771 | orchestrator | octavia : Adding octavia related roles --------------------------------- 15.06s 2026-01-10 14:59:44.652774 | orchestrator | octavia : Restart octavia-api container -------------------------------- 14.96s 2026-01-10 14:59:44.652778 | orchestrator | octavia : Create security groups for octavia --------------------------- 11.10s 2026-01-10 14:59:44.652782 | orchestrator | octavia : Restart octavia-driver-agent container ----------------------- 10.90s 2026-01-10 14:59:44.652785 | orchestrator | octavia : Restart octavia-health-manager container --------------------- 10.24s 2026-01-10 14:59:44.652789 | orchestrator | octavia : Restart octavia-worker container ----------------------------- 10.12s 2026-01-10 14:59:44.652793 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 7.67s 2026-01-10 14:59:44.652797 | orchestrator | octavia : Copying certificate files for octavia-health-manager ---------- 7.42s 2026-01-10 14:59:44.652800 
| orchestrator | service-ks-register : octavia | Granting user roles --------------------- 6.62s 2026-01-10 14:59:44.652804 | orchestrator | octavia : Get security groups for octavia ------------------------------- 6.59s 2026-01-10 14:59:44.652808 | orchestrator | octavia : Create loadbalancer management subnet ------------------------- 6.25s 2026-01-10 14:59:44.652812 | orchestrator | octavia : Copying certificate files for octavia-housekeeping ------------ 6.17s 2026-01-10 14:59:44.652816 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 6.06s 2026-01-10 14:59:44.652819 | orchestrator | octavia : Restart octavia-housekeeping container ------------------------ 5.89s 2026-01-10 14:59:44.652823 | orchestrator | octavia : Create ports for Octavia health-manager nodes ----------------- 5.85s 2026-01-10 14:59:44.652827 | orchestrator | octavia : Create amphora flavor ----------------------------------------- 5.51s 2026-01-10 14:59:44.652831 | orchestrator | service-cert-copy : octavia | Copying over extra CA certificates -------- 5.39s 2026-01-10 14:59:47.694177 | orchestrator | 2026-01-10 14:59:47 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-01-10 14:59:50.737185 | orchestrator | 2026-01-10 14:59:50 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-01-10 14:59:53.787446 | orchestrator | 2026-01-10 14:59:53 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-01-10 14:59:56.830430 | orchestrator | 2026-01-10 14:59:56 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-01-10 14:59:59.871357 | orchestrator | 2026-01-10 14:59:59 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-01-10 15:00:02.912013 | orchestrator | 2026-01-10 15:00:02 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-01-10 15:00:05.959831 | orchestrator | 2026-01-10 15:00:05 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-01-10 15:00:09.015779 | orchestrator | 2026-01-10 
15:00:09 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-01-10 15:00:12.063133 | orchestrator | 2026-01-10 15:00:12 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-01-10 15:00:15.105016 | orchestrator | 2026-01-10 15:00:15 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-01-10 15:00:18.152540 | orchestrator | 2026-01-10 15:00:18 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-01-10 15:00:21.195505 | orchestrator | 2026-01-10 15:00:21 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-01-10 15:00:24.230079 | orchestrator | 2026-01-10 15:00:24 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-01-10 15:00:27.273558 | orchestrator | 2026-01-10 15:00:27 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-01-10 15:00:30.311227 | orchestrator | 2026-01-10 15:00:30 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-01-10 15:00:33.361171 | orchestrator | 2026-01-10 15:00:33 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-01-10 15:00:36.404718 | orchestrator | 2026-01-10 15:00:36 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-01-10 15:00:39.449862 | orchestrator | 2026-01-10 15:00:39 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-01-10 15:00:42.497481 | orchestrator | 2026-01-10 15:00:42 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-01-10 15:00:45.543849 | orchestrator |
2026-01-10 15:00:45.892735 | orchestrator |
2026-01-10 15:00:45.898636 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Sat Jan 10 15:00:45 UTC 2026
2026-01-10 15:00:45.898732 | orchestrator |
2026-01-10 15:00:46.267931 | orchestrator | ok: Runtime: 0:35:13.710177
2026-01-10 15:00:46.575289 |
2026-01-10 15:00:46.575481 | TASK [Bootstrap services]
2026-01-10 15:00:47.424741 | orchestrator |
2026-01-10 15:00:47.424871 | orchestrator | # BOOTSTRAP
2026-01-10 15:00:47.424881 | orchestrator |
2026-01-10 15:00:47.424887 | orchestrator | + set -e
2026-01-10 15:00:47.424892 | orchestrator | + echo
2026-01-10 15:00:47.424898 | orchestrator | + echo '# BOOTSTRAP'
2026-01-10 15:00:47.424907 | orchestrator | + echo
2026-01-10 15:00:47.424927 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh
2026-01-10 15:00:47.433981 | orchestrator | + set -e
2026-01-10 15:00:47.434175 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh
2026-01-10 15:00:52.435563 | orchestrator | 2026-01-10 15:00:52 | INFO  | It takes a moment until task 158216f5-00a2-4a98-82fb-5bee5ff96838 (flavor-manager) has been started and output is visible here.
2026-01-10 15:01:00.966931 | orchestrator | 2026-01-10 15:00:55 | INFO  | Flavor SCS-1L-1 created
2026-01-10 15:01:00.967054 | orchestrator | 2026-01-10 15:00:56 | INFO  | Flavor SCS-1L-1-5 created
2026-01-10 15:01:00.967068 | orchestrator | 2026-01-10 15:00:56 | INFO  | Flavor SCS-1V-2 created
2026-01-10 15:01:00.967076 | orchestrator | 2026-01-10 15:00:56 | INFO  | Flavor SCS-1V-2-5 created
2026-01-10 15:01:00.967083 | orchestrator | 2026-01-10 15:00:57 | INFO  | Flavor SCS-1V-4 created
2026-01-10 15:01:00.967090 | orchestrator | 2026-01-10 15:00:57 | INFO  | Flavor SCS-1V-4-10 created
2026-01-10 15:01:00.967097 | orchestrator | 2026-01-10 15:00:57 | INFO  | Flavor SCS-1V-8 created
2026-01-10 15:01:00.967106 | orchestrator | 2026-01-10 15:00:57 | INFO  | Flavor SCS-1V-8-20 created
2026-01-10 15:01:00.967125 | orchestrator | 2026-01-10 15:00:57 | INFO  | Flavor SCS-2V-4 created
2026-01-10 15:01:00.967132 | orchestrator | 2026-01-10 15:00:57 | INFO  | Flavor SCS-2V-4-10 created
2026-01-10 15:01:00.967139 | orchestrator | 2026-01-10 15:00:58 | INFO  | Flavor SCS-2V-8 created
2026-01-10 15:01:00.967147 | orchestrator | 2026-01-10 15:00:58 | INFO  | Flavor SCS-2V-8-20 created
2026-01-10 15:01:00.967153 | orchestrator | 2026-01-10 15:00:58 | INFO  | Flavor SCS-2V-16 created
2026-01-10 15:01:00.967160 | orchestrator | 2026-01-10 15:00:58 | INFO  | Flavor SCS-2V-16-50 created
2026-01-10 15:01:00.967167 | orchestrator | 2026-01-10 15:00:58 | INFO  | Flavor SCS-4V-8 created
2026-01-10 15:01:00.967174 | orchestrator | 2026-01-10 15:00:58 | INFO  | Flavor SCS-4V-8-20 created
2026-01-10 15:01:00.967181 | orchestrator | 2026-01-10 15:00:59 | INFO  | Flavor SCS-4V-16 created
2026-01-10 15:01:00.967188 | orchestrator | 2026-01-10 15:00:59 | INFO  | Flavor SCS-4V-16-50 created
2026-01-10 15:01:00.967225 | orchestrator | 2026-01-10 15:00:59 | INFO  | Flavor SCS-4V-32 created
2026-01-10 15:01:00.967232 | orchestrator | 2026-01-10 15:00:59 | INFO  | Flavor SCS-4V-32-100 created
2026-01-10 15:01:00.967238 | orchestrator | 2026-01-10 15:00:59 | INFO  | Flavor SCS-8V-16 created
2026-01-10 15:01:00.967244 | orchestrator | 2026-01-10 15:00:59 | INFO  | Flavor SCS-8V-16-50 created
2026-01-10 15:01:00.967250 | orchestrator | 2026-01-10 15:00:59 | INFO  | Flavor SCS-8V-32 created
2026-01-10 15:01:00.967256 | orchestrator | 2026-01-10 15:00:59 | INFO  | Flavor SCS-8V-32-100 created
2026-01-10 15:01:00.967262 | orchestrator | 2026-01-10 15:01:00 | INFO  | Flavor SCS-16V-32 created
2026-01-10 15:01:00.967269 | orchestrator | 2026-01-10 15:01:00 | INFO  | Flavor SCS-16V-32-100 created
2026-01-10 15:01:00.967275 | orchestrator | 2026-01-10 15:01:00 | INFO  | Flavor SCS-2V-4-20s created
2026-01-10 15:01:00.967281 | orchestrator | 2026-01-10 15:01:00 | INFO  | Flavor SCS-4V-8-50s created
2026-01-10 15:01:00.967287 | orchestrator | 2026-01-10 15:01:00 | INFO  | Flavor SCS-8V-32-100s created
2026-01-10 15:01:03.341241 | orchestrator | 2026-01-10 15:01:03 | INFO  | Trying to run play bootstrap-basic in environment openstack
2026-01-10 15:01:13.471156 | orchestrator | 2026-01-10 15:01:13 | INFO  | Task 7bb36a7e-3d7c-4d23-abf1-8c9b41829e55 (bootstrap-basic) was prepared for execution.
2026-01-10 15:01:13.471305 | orchestrator | 2026-01-10 15:01:13 | INFO  | It takes a moment until task 7bb36a7e-3d7c-4d23-abf1-8c9b41829e55 (bootstrap-basic) has been started and output is visible here.
2026-01-10 15:02:01.047537 | orchestrator |
2026-01-10 15:02:01.047640 | orchestrator | PLAY [Bootstrap basic OpenStack services] **************************************
2026-01-10 15:02:01.047651 | orchestrator |
2026-01-10 15:02:01.047658 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-01-10 15:02:01.047666 | orchestrator | Saturday 10 January 2026 15:01:17 +0000 (0:00:00.078) 0:00:00.078 ******
2026-01-10 15:02:01.047673 | orchestrator | ok: [localhost]
2026-01-10 15:02:01.047681 | orchestrator |
2026-01-10 15:02:01.047688 | orchestrator | TASK [Get volume type LUKS] ****************************************************
2026-01-10 15:02:01.047694 | orchestrator | Saturday 10 January 2026 15:01:19 +0000 (0:00:01.933) 0:00:02.012 ******
2026-01-10 15:02:01.047701 | orchestrator | ok: [localhost]
2026-01-10 15:02:01.047708 | orchestrator |
2026-01-10 15:02:01.047715 | orchestrator | TASK [Create volume type LUKS] *************************************************
2026-01-10 15:02:01.047722 | orchestrator | Saturday 10 January 2026 15:01:28 +0000 (0:00:08.594) 0:00:10.606 ******
2026-01-10 15:02:01.047728 | orchestrator | changed: [localhost]
2026-01-10 15:02:01.047736 | orchestrator |
2026-01-10 15:02:01.047742 | orchestrator | TASK [Create public network] ***************************************************
2026-01-10 15:02:01.047749 | orchestrator | Saturday 10 January 2026 15:01:36 +0000 (0:00:08.088) 0:00:18.695 ******
2026-01-10 15:02:01.047755 | orchestrator | changed: [localhost]
2026-01-10 15:02:01.047761 | orchestrator |
2026-01-10 15:02:01.047768 | orchestrator | TASK [Set public network to default] *******************************************
2026-01-10 15:02:01.047775 | orchestrator | Saturday 10 January 2026 15:01:42 +0000 (0:00:05.686) 0:00:24.381 ******
2026-01-10 15:02:01.047786 | orchestrator | changed: [localhost]
2026-01-10 15:02:01.047791 | orchestrator |
2026-01-10 15:02:01.047797 | orchestrator | TASK [Create public subnet] ****************************************************
2026-01-10 15:02:01.047803 | orchestrator | Saturday 10 January 2026 15:01:48 +0000 (0:00:06.492) 0:00:30.874 ******
2026-01-10 15:02:01.047809 | orchestrator | changed: [localhost]
2026-01-10 15:02:01.047816 | orchestrator |
2026-01-10 15:02:01.047822 | orchestrator | TASK [Create default IPv4 subnet pool] *****************************************
2026-01-10 15:02:01.047828 | orchestrator | Saturday 10 January 2026 15:01:53 +0000 (0:00:04.525) 0:00:35.400 ******
2026-01-10 15:02:01.047835 | orchestrator | changed: [localhost]
2026-01-10 15:02:01.047841 | orchestrator |
2026-01-10 15:02:01.047847 | orchestrator | TASK [Create manager role] *****************************************************
2026-01-10 15:02:01.047865 | orchestrator | Saturday 10 January 2026 15:01:57 +0000 (0:00:03.800) 0:00:39.201 ******
2026-01-10 15:02:01.047872 | orchestrator | ok: [localhost]
2026-01-10 15:02:01.047878 | orchestrator |
2026-01-10 15:02:01.047885 | orchestrator | PLAY RECAP *********************************************************************
2026-01-10 15:02:01.047892 | orchestrator | localhost : ok=8  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 15:02:01.047899 | orchestrator |
2026-01-10 15:02:01.047906 | orchestrator |
2026-01-10 15:02:01.047913 | orchestrator | TASKS RECAP ********************************************************************
2026-01-10 15:02:01.047920 | orchestrator | Saturday 10 January 2026 15:02:00 +0000 (0:00:03.773) 0:00:42.974 ******
2026-01-10 15:02:01.047926 | orchestrator | ===============================================================================
2026-01-10 15:02:01.047934 | orchestrator | Get volume type LUKS ---------------------------------------------------- 8.59s
2026-01-10 15:02:01.047941 | orchestrator | Create volume type LUKS ------------------------------------------------- 8.09s
2026-01-10 15:02:01.047947 | orchestrator | Set public network to default ------------------------------------------- 6.49s
2026-01-10 15:02:01.047954 | orchestrator | Create public network --------------------------------------------------- 5.69s
2026-01-10 15:02:01.047981 | orchestrator | Create public subnet ---------------------------------------------------- 4.53s
2026-01-10 15:02:01.047989 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 3.80s
2026-01-10 15:02:01.047997 | orchestrator | Create manager role ----------------------------------------------------- 3.77s
2026-01-10 15:02:01.048004 | orchestrator | Gathering Facts --------------------------------------------------------- 1.93s
2026-01-10 15:02:03.451174 | orchestrator | 2026-01-10 15:02:03 | INFO  | It takes a moment until task be648944-ec43-4229-aeb9-8e3151f9d6eb (image-manager) has been started and output is visible here.
2026-01-10 15:02:45.361508 | orchestrator | 2026-01-10 15:02:06 | INFO  | Processing image 'Cirros 0.6.2'
2026-01-10 15:02:45.361714 | orchestrator | 2026-01-10 15:02:06 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302
2026-01-10 15:02:45.361727 | orchestrator | 2026-01-10 15:02:06 | INFO  | Importing image Cirros 0.6.2
2026-01-10 15:02:45.361732 | orchestrator | 2026-01-10 15:02:06 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img
2026-01-10 15:02:45.361738 | orchestrator | 2026-01-10 15:02:08 | INFO  | Waiting for image to leave queued state...
2026-01-10 15:02:45.361743 | orchestrator | 2026-01-10 15:02:10 | INFO  | Waiting for import to complete...
2026-01-10 15:02:45.361747 | orchestrator | 2026-01-10 15:02:21 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images
2026-01-10 15:02:45.361753 | orchestrator | 2026-01-10 15:02:21 | INFO  | Checking parameters of 'Cirros 0.6.2'
2026-01-10 15:02:45.361757 | orchestrator | 2026-01-10 15:02:21 | INFO  | Setting internal_version = 0.6.2
2026-01-10 15:02:45.361768 | orchestrator | 2026-01-10 15:02:21 | INFO  | Setting image_original_user = cirros
2026-01-10 15:02:45.361785 | orchestrator | 2026-01-10 15:02:21 | INFO  | Adding tag os:cirros
2026-01-10 15:02:45.361789 | orchestrator | 2026-01-10 15:02:21 | INFO  | Setting property architecture: x86_64
2026-01-10 15:02:45.361793 | orchestrator | 2026-01-10 15:02:21 | INFO  | Setting property hw_disk_bus: scsi
2026-01-10 15:02:45.361797 | orchestrator | 2026-01-10 15:02:22 | INFO  | Setting property hw_rng_model: virtio
2026-01-10 15:02:45.361801 | orchestrator | 2026-01-10 15:02:22 | INFO  | Setting property hw_scsi_model: virtio-scsi
2026-01-10 15:02:45.361805 | orchestrator | 2026-01-10 15:02:22 | INFO  | Setting property hw_watchdog_action: reset
2026-01-10 15:02:45.361809 | orchestrator | 2026-01-10 15:02:22 | INFO  | Setting property hypervisor_type: qemu
2026-01-10 15:02:45.361813 | orchestrator | 2026-01-10 15:02:23 | INFO  | Setting property os_distro: cirros
2026-01-10 15:02:45.361816 | orchestrator | 2026-01-10 15:02:23 | INFO  | Setting property os_purpose: minimal
2026-01-10 15:02:45.361820 | orchestrator | 2026-01-10 15:02:23 | INFO  | Setting property replace_frequency: never
2026-01-10 15:02:45.361824 | orchestrator | 2026-01-10 15:02:23 | INFO  | Setting property uuid_validity: none
2026-01-10 15:02:45.361828 | orchestrator | 2026-01-10 15:02:23 | INFO  | Setting property provided_until: none
2026-01-10 15:02:45.361832 | orchestrator | 2026-01-10 15:02:24 | INFO  | Setting property image_description: Cirros
2026-01-10 15:02:45.361835 | orchestrator | 2026-01-10 15:02:24 | INFO  | Setting property image_name: Cirros
2026-01-10 15:02:45.361839 | orchestrator | 2026-01-10 15:02:24 | INFO  | Setting property internal_version: 0.6.2
2026-01-10 15:02:45.361843 | orchestrator | 2026-01-10 15:02:24 | INFO  | Setting property image_original_user: cirros
2026-01-10 15:02:45.361880 | orchestrator | 2026-01-10 15:02:24 | INFO  | Setting property os_version: 0.6.2
2026-01-10 15:02:45.361896 | orchestrator | 2026-01-10 15:02:25 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img
2026-01-10 15:02:45.361902 | orchestrator | 2026-01-10 15:02:25 | INFO  | Setting property image_build_date: 2023-05-30
2026-01-10 15:02:45.361906 | orchestrator | 2026-01-10 15:02:25 | INFO  | Checking status of 'Cirros 0.6.2'
2026-01-10 15:02:45.361909 | orchestrator | 2026-01-10 15:02:25 | INFO  | Checking visibility of 'Cirros 0.6.2'
2026-01-10 15:02:45.361913 | orchestrator | 2026-01-10 15:02:25 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public'
2026-01-10 15:02:45.361917 | orchestrator | 2026-01-10 15:02:25 | INFO  | Processing image 'Cirros 0.6.3'
2026-01-10 15:02:45.361924 | orchestrator | 2026-01-10 15:02:26 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302
2026-01-10 15:02:45.361928 | orchestrator | 2026-01-10 15:02:26 | INFO  | Importing image Cirros 0.6.3
2026-01-10 15:02:45.361931 | orchestrator | 2026-01-10 15:02:26 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2026-01-10 15:02:45.361935 | orchestrator | 2026-01-10 15:02:27 | INFO  | Waiting for image to leave queued state...
2026-01-10 15:02:45.361941 | orchestrator | 2026-01-10 15:02:29 | INFO  | Waiting for import to complete...
2026-01-10 15:02:45.361979 | orchestrator | 2026-01-10 15:02:40 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images
2026-01-10 15:02:45.361986 | orchestrator | 2026-01-10 15:02:40 | INFO  | Checking parameters of 'Cirros 0.6.3'
2026-01-10 15:02:45.361992 | orchestrator | 2026-01-10 15:02:40 | INFO  | Setting internal_version = 0.6.3
2026-01-10 15:02:45.361997 | orchestrator | 2026-01-10 15:02:40 | INFO  | Setting image_original_user = cirros
2026-01-10 15:02:45.362003 | orchestrator | 2026-01-10 15:02:40 | INFO  | Adding tag os:cirros
2026-01-10 15:02:45.362010 | orchestrator | 2026-01-10 15:02:40 | INFO  | Setting property architecture: x86_64
2026-01-10 15:02:45.362091 | orchestrator | 2026-01-10 15:02:41 | INFO  | Setting property hw_disk_bus: scsi
2026-01-10 15:02:45.362098 | orchestrator | 2026-01-10 15:02:41 | INFO  | Setting property hw_rng_model: virtio
2026-01-10 15:02:45.362105 | orchestrator | 2026-01-10 15:02:41 | INFO  | Setting property hw_scsi_model: virtio-scsi
2026-01-10 15:02:45.362111 | orchestrator | 2026-01-10 15:02:41 | INFO  | Setting property hw_watchdog_action: reset
2026-01-10 15:02:45.362116 | orchestrator | 2026-01-10 15:02:41 | INFO  | Setting property hypervisor_type: qemu
2026-01-10 15:02:45.362122 | orchestrator | 2026-01-10 15:02:42 | INFO  | Setting property os_distro: cirros
2026-01-10 15:02:45.362129 | orchestrator | 2026-01-10 15:02:42 | INFO  | Setting property os_purpose: minimal
2026-01-10 15:02:45.362135 | orchestrator | 2026-01-10 15:02:42 | INFO  | Setting property replace_frequency: never
2026-01-10 15:02:45.362141 | orchestrator | 2026-01-10 15:02:42 | INFO  | Setting property uuid_validity: none
2026-01-10 15:02:45.362146 | orchestrator | 2026-01-10 15:02:42 | INFO  | Setting property provided_until: none
2026-01-10 15:02:45.362152 | orchestrator | 2026-01-10 15:02:43 | INFO  | Setting property image_description: Cirros
2026-01-10 15:02:45.362158 | orchestrator | 2026-01-10 15:02:43 | INFO  | Setting property image_name: Cirros
2026-01-10 15:02:45.362164 | orchestrator | 2026-01-10 15:02:43 | INFO  | Setting property internal_version: 0.6.3
2026-01-10 15:02:45.362188 | orchestrator | 2026-01-10 15:02:43 | INFO  | Setting property image_original_user: cirros
2026-01-10 15:02:45.362194 | orchestrator | 2026-01-10 15:02:43 | INFO  | Setting property os_version: 0.6.3
2026-01-10 15:02:45.362201 | orchestrator | 2026-01-10 15:02:44 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2026-01-10 15:02:45.362208 | orchestrator | 2026-01-10 15:02:44 | INFO  | Setting property image_build_date: 2024-09-26
2026-01-10 15:02:45.362213 | orchestrator | 2026-01-10 15:02:44 | INFO  | Checking status of 'Cirros 0.6.3'
2026-01-10 15:02:45.362219 | orchestrator | 2026-01-10 15:02:44 | INFO  | Checking visibility of 'Cirros 0.6.3'
2026-01-10 15:02:45.362225 | orchestrator | 2026-01-10 15:02:44 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public'
2026-01-10 15:02:45.746211 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh
2026-01-10 15:02:48.142614 | orchestrator | 2026-01-10 15:02:48 | INFO  | date: 2026-01-10
2026-01-10 15:02:48.142701 | orchestrator | 2026-01-10 15:02:48 | INFO  | image: octavia-amphora-haproxy-2024.2.20260110.qcow2
2026-01-10 15:02:48.142733 | orchestrator | 2026-01-10 15:02:48 | INFO  | url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260110.qcow2
2026-01-10 15:02:48.142744 | orchestrator | 2026-01-10 15:02:48 | INFO  | checksum_url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260110.qcow2.CHECKSUM
2026-01-10 15:02:48.299069 | orchestrator | 2026-01-10 15:02:48 | INFO  | checksum: ae42c33b510a5d6430e8d5e850fcf0e0166b59a495061a775b2e6eb290d4c686
2026-01-10 15:02:48.379261 | orchestrator | 2026-01-10 15:02:48 | INFO  | It takes a moment until task ac42c5d4-5918-49fd-b0cc-36ce328ec7b3 (image-manager) has been started and output is visible here.
2026-01-10 15:03:59.761236 | orchestrator | 2026-01-10 15:02:50 | INFO  | Processing image 'OpenStack Octavia Amphora 2026-01-10'
2026-01-10 15:03:59.761342 | orchestrator | 2026-01-10 15:02:50 | INFO  | Tested URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260110.qcow2: 200
2026-01-10 15:03:59.761356 | orchestrator | 2026-01-10 15:02:50 | INFO  | Importing image OpenStack Octavia Amphora 2026-01-10
2026-01-10 15:03:59.761361 | orchestrator | 2026-01-10 15:02:50 | INFO  | Importing from URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260110.qcow2
2026-01-10 15:03:59.761366 | orchestrator | 2026-01-10 15:02:52 | INFO  | Waiting for image to leave queued state...
2026-01-10 15:03:59.761371 | orchestrator | 2026-01-10 15:02:54 | INFO  | Waiting for import to complete...
2026-01-10 15:03:59.761376 | orchestrator | 2026-01-10 15:03:04 | INFO  | Waiting for import to complete...
2026-01-10 15:03:59.761379 | orchestrator | 2026-01-10 15:03:14 | INFO  | Waiting for import to complete...
2026-01-10 15:03:59.761383 | orchestrator | 2026-01-10 15:03:24 | INFO  | Waiting for import to complete...
2026-01-10 15:03:59.761389 | orchestrator | 2026-01-10 15:03:34 | INFO  | Waiting for import to complete...
2026-01-10 15:03:59.761393 | orchestrator | 2026-01-10 15:03:44 | INFO  | Waiting for import to complete...
2026-01-10 15:03:59.761397 | orchestrator | 2026-01-10 15:03:54 | INFO  | Import of 'OpenStack Octavia Amphora 2026-01-10' successfully completed, reloading images
2026-01-10 15:03:59.761402 | orchestrator | 2026-01-10 15:03:55 | INFO  | Checking parameters of 'OpenStack Octavia Amphora 2026-01-10'
2026-01-10 15:03:59.761425 | orchestrator | 2026-01-10 15:03:55 | INFO  | Setting internal_version = 2026-01-10
2026-01-10 15:03:59.761429 | orchestrator | 2026-01-10 15:03:55 | INFO  | Setting image_original_user = ubuntu
2026-01-10 15:03:59.761435 | orchestrator | 2026-01-10 15:03:55 | INFO  | Adding tag amphora
2026-01-10 15:03:59.761441 | orchestrator | 2026-01-10 15:03:55 | INFO  | Adding tag os:ubuntu
2026-01-10 15:03:59.761451 | orchestrator | 2026-01-10 15:03:55 | INFO  | Setting property architecture: x86_64
2026-01-10 15:03:59.761458 | orchestrator | 2026-01-10 15:03:55 | INFO  | Setting property hw_disk_bus: scsi
2026-01-10 15:03:59.761465 | orchestrator | 2026-01-10 15:03:55 | INFO  | Setting property hw_rng_model: virtio
2026-01-10 15:03:59.761471 | orchestrator | 2026-01-10 15:03:56 | INFO  | Setting property hw_scsi_model: virtio-scsi
2026-01-10 15:03:59.761477 | orchestrator | 2026-01-10 15:03:56 | INFO  | Setting property hw_watchdog_action: reset
2026-01-10 15:03:59.761482 | orchestrator | 2026-01-10 15:03:56 | INFO  | Setting property hypervisor_type: qemu
2026-01-10 15:03:59.761488 | orchestrator | 2026-01-10 15:03:56 | INFO  | Setting property os_distro: ubuntu
2026-01-10 15:03:59.761494 | orchestrator | 2026-01-10 15:03:57 | INFO  | Setting property replace_frequency: quarterly
2026-01-10 15:03:59.761500 | orchestrator | 2026-01-10 15:03:57 | INFO  | Setting property uuid_validity: last-1
2026-01-10 15:03:59.761506 | orchestrator | 2026-01-10 15:03:57 | INFO  | Setting property provided_until: none
2026-01-10 15:03:59.761512 | orchestrator | 2026-01-10 15:03:57 | INFO  | Setting property os_purpose: network
2026-01-10 15:03:59.761534 | orchestrator | 2026-01-10 15:03:57 | INFO  | Setting property image_description: OpenStack Octavia Amphora
2026-01-10 15:03:59.761541 | orchestrator | 2026-01-10 15:03:58 | INFO  | Setting property image_name: OpenStack Octavia Amphora
2026-01-10 15:03:59.761548 | orchestrator | 2026-01-10 15:03:58 | INFO  | Setting property internal_version: 2026-01-10
2026-01-10 15:03:59.761554 | orchestrator | 2026-01-10 15:03:58 | INFO  | Setting property image_original_user: ubuntu
2026-01-10 15:03:59.761560 | orchestrator | 2026-01-10 15:03:58 | INFO  | Setting property os_version: 2026-01-10
2026-01-10 15:03:59.761567 | orchestrator | 2026-01-10 15:03:58 | INFO  | Setting property image_source: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260110.qcow2
2026-01-10 15:03:59.761574 | orchestrator | 2026-01-10 15:03:59 | INFO  | Setting property image_build_date: 2026-01-10
2026-01-10 15:03:59.761580 | orchestrator | 2026-01-10 15:03:59 | INFO  | Checking status of 'OpenStack Octavia Amphora 2026-01-10'
2026-01-10 15:03:59.761585 | orchestrator | 2026-01-10 15:03:59 | INFO  | Checking visibility of 'OpenStack Octavia Amphora 2026-01-10'
2026-01-10 15:03:59.761682 | orchestrator | 2026-01-10 15:03:59 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate)
2026-01-10 15:03:59.761694 | orchestrator | 2026-01-10 15:03:59 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored
2026-01-10 15:03:59.761699 | orchestrator | 2026-01-10 15:03:59 | INFO  | Processing image 'Cirros 0.6.2' (removal candidate)
2026-01-10 15:03:59.761703 | orchestrator | 2026-01-10 15:03:59 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored
2026-01-10 15:04:00.246422 | orchestrator | ok: Runtime: 0:03:13.047066
2026-01-10 15:04:00.268961 |
2026-01-10 15:04:00.269092 | TASK [Run checks]
2026-01-10 15:04:01.025323 | orchestrator | + set -e
2026-01-10 15:04:01.025493 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-01-10 15:04:01.025504 | orchestrator | ++ export INTERACTIVE=false
2026-01-10 15:04:01.025512 | orchestrator | ++ INTERACTIVE=false
2026-01-10 15:04:01.025518 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-01-10 15:04:01.025522 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-01-10 15:04:01.025528 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-01-10 15:04:01.026642 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-01-10 15:04:01.033368 | orchestrator |
2026-01-10 15:04:01.033474 | orchestrator | # CHECK
2026-01-10 15:04:01.033485 | orchestrator |
2026-01-10 15:04:01.033492 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-01-10 15:04:01.033501 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-01-10 15:04:01.033506 | orchestrator | + echo
2026-01-10 15:04:01.033510 | orchestrator | + echo '# CHECK'
2026-01-10 15:04:01.033514 | orchestrator | + echo
2026-01-10 15:04:01.033522 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2026-01-10 15:04:01.034445 | orchestrator | ++ semver 9.5.0 5.0.0
2026-01-10 15:04:01.095044 | orchestrator |
2026-01-10 15:04:01.095130 | orchestrator | ## Containers @ testbed-manager
2026-01-10 15:04:01.095143 | orchestrator |
2026-01-10 15:04:01.095152 | orchestrator | + [[ 1 -eq -1 ]]
2026-01-10 15:04:01.095157 | orchestrator | + echo
2026-01-10 15:04:01.095162 | orchestrator | + echo '## Containers @ testbed-manager'
2026-01-10 15:04:01.095167 | orchestrator | + echo
2026-01-10 15:04:01.095171 | orchestrator | + osism container testbed-manager ps
2026-01-10 15:04:03.109871 | orchestrator | 2026-01-10 15:04:03 | INFO  | Creating empty known_hosts file: /share/known_hosts
2026-01-10 15:04:03.488955 | orchestrator | CONTAINER ID  IMAGE  COMMAND  CREATED  STATUS  PORTS  NAMES
2026-01-10 15:04:03.489093 | orchestrator | 719aecd1d002  registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130  "dumb-init --single-…"  11 minutes ago  Up 11 minutes  prometheus_blackbox_exporter
2026-01-10 15:04:03.489106 | orchestrator | d1f1657c50df  registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130  "dumb-init --single-…"  11 minutes ago  Up 11 minutes  prometheus_alertmanager
2026-01-10 15:04:03.489111 | orchestrator | 562d0758fbb4  registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130  "dumb-init --single-…"  12 minutes ago  Up 12 minutes  prometheus_cadvisor
2026-01-10 15:04:03.489120 | orchestrator | 8219b31c4d95  registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130  "dumb-init --single-…"  12 minutes ago  Up 12 minutes  prometheus_node_exporter
2026-01-10 15:04:03.489124 | orchestrator | e14241ee70d9  registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130  "dumb-init --single-…"  12 minutes ago  Up 12 minutes  prometheus_server
2026-01-10 15:04:03.489131 | orchestrator | 5c474bf25d04  registry.osism.tech/osism/cephclient:18.2.7  "/usr/bin/dumb-init …"  18 minutes ago  Up 17 minutes  cephclient
2026-01-10 15:04:03.489136 | orchestrator | 09f0ebd9cffc  registry.osism.tech/kolla/release/cron:3.0.20251130  "dumb-init --single-…"  30 minutes ago  Up 30 minutes  cron
2026-01-10 15:04:03.489140 | orchestrator | 6a419b765260  registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130  "dumb-init --single-…"  30 minutes ago  Up 30 minutes  kolla_toolbox
2026-01-10 15:04:03.489162 | orchestrator | 461f0eb922f0  registry.osism.tech/kolla/release/fluentd:5.0.8.20251130  "dumb-init --single-…"  31 minutes ago  Up 31 minutes  fluentd
2026-01-10 15:04:03.489167 | orchestrator | 98fdf8b22a4d  phpmyadmin/phpmyadmin:5.2  "/docker-entrypoint.…"  32 minutes ago  Up 31 minutes (healthy)  80/tcp  phpmyadmin
2026-01-10 15:04:03.489171 | orchestrator | cb152319236d  registry.osism.tech/osism/openstackclient:2024.2  "/usr/bin/dumb-init …"  32 minutes ago  Up 32 minutes  openstackclient
2026-01-10 15:04:03.489175 | orchestrator | 33948816ed67  registry.osism.tech/osism/homer:v25.10.1  "/bin/sh /entrypoint…"  32 minutes ago  Up 32 minutes (healthy)  8080/tcp  homer
2026-01-10 15:04:03.489179 | orchestrator | c405787306a4  registry.osism.tech/dockerhub/ubuntu/squid:6.1-23.10_beta  "entrypoint.sh -f /e…"  56 minutes ago  Up 56 minutes (healthy)  192.168.16.5:3128->3128/tcp  squid
2026-01-10 15:04:03.489186 | orchestrator | 20def61a4e2a  registry.osism.tech/osism/inventory-reconciler:0.20251130.0  "/sbin/tini -- /entr…"  About an hour ago  Up 39 minutes (healthy)  manager-inventory_reconciler-1
2026-01-10 15:04:03.489202 | orchestrator | e1a4d17d08c6  registry.osism.tech/osism/ceph-ansible:0.20251130.0  "/entrypoint.sh osis…"  About an hour ago  Up 39 minutes (healthy)  ceph-ansible
2026-01-10 15:04:03.489206 | orchestrator | 0a82cc0f7500  registry.osism.tech/osism/osism-kubernetes:0.20251130.0  "/entrypoint.sh osis…"  About an hour ago  Up 39 minutes (healthy)  osism-kubernetes
2026-01-10 15:04:03.489210 | orchestrator | 51c8e8e5049c  registry.osism.tech/osism/osism-ansible:0.20251130.0  "/entrypoint.sh osis…"  About an hour ago  Up 39 minutes (healthy)  osism-ansible
2026-01-10 15:04:03.489214 | orchestrator | bdc07d8de5f8  registry.osism.tech/osism/kolla-ansible:0.20251130.0  "/entrypoint.sh osis…"  About an hour ago  Up 39 minutes (healthy)  kolla-ansible
2026-01-10 15:04:03.489218 | orchestrator | e828e3bb8636  registry.osism.tech/osism/ara-server:1.7.3  "sh -c '/wait && /ru…"  About an hour ago  Up 39 minutes (healthy)  8000/tcp  manager-ara-server-1
2026-01-10 15:04:03.489222 | orchestrator | 1c4f83c42c4d  registry.osism.tech/osism/osism:0.20251130.1  "/sbin/tini -- osism…"  About an hour ago  Up 40 minutes (healthy)  manager-listener-1
2026-01-10 15:04:03.489226 | orchestrator | 0e9dae333438  registry.osism.tech/osism/osism:0.20251130.1  "/sbin/tini -- sleep…"  About an hour ago  Up 40 minutes (healthy)  osismclient
2026-01-10 15:04:03.489230 | orchestrator | dfe708966bba  registry.osism.tech/osism/osism:0.20251130.1  "/sbin/tini -- osism…"  About an hour ago  Up 40 minutes (healthy)  manager-beat-1
2026-01-10 15:04:03.489238 | orchestrator | 4218cf003419  registry.osism.tech/dockerhub/library/redis:7.4.7-alpine  "docker-entrypoint.s…"  About an hour ago  Up 40 minutes (healthy)  6379/tcp  manager-redis-1
2026-01-10 15:04:03.489242 | orchestrator | 185f3aaa6177  registry.osism.tech/osism/osism-frontend:0.20251130.1  "docker-entrypoint.s…"  About an hour ago  Up 40 minutes  192.168.16.5:3000->3000/tcp  osism-frontend
2026-01-10 15:04:03.489246 | orchestrator | 42d29d9816b1  registry.osism.tech/osism/osism:0.20251130.1  "/sbin/tini -- osism…"  About an hour ago  Up 40 minutes (healthy)  manager-openstack-1
2026-01-10 15:04:03.489249 | orchestrator | d1d9151f10f6  registry.osism.tech/dockerhub/library/mariadb:11.8.4  "docker-entrypoint.s…"  About an hour ago  Up 40 minutes (healthy)  3306/tcp  manager-mariadb-1
2026-01-10 15:04:03.489253 | orchestrator | 3a3bc4255054  registry.osism.tech/osism/osism:0.20251130.1  "/sbin/tini -- osism…"  About an hour ago  Up 40 minutes (healthy)  192.168.16.5:8000->8000/tcp  manager-api-1
2026-01-10 15:04:03.489260 | orchestrator | c209c3e9477d  registry.osism.tech/osism/osism:0.20251130.1  "/sbin/tini -- osism…"  About an hour ago  Up 40 minutes (healthy)  manager-flower-1
2026-01-10 15:04:03.489264 | orchestrator | b78a82f71f9f  registry.osism.tech/dockerhub/library/traefik:v3.5.0  "/entrypoint.sh trae…"  About an hour ago  Up About an hour (healthy)  192.168.16.5:80->80/tcp, 192.168.16.5:443->443/tcp, 192.168.16.5:8122->8080/tcp  traefik
2026-01-10 15:04:03.804855 | orchestrator |
2026-01-10 15:04:03.804944 | orchestrator | ## Images @ testbed-manager
2026-01-10 15:04:03.804952 | orchestrator |
2026-01-10 15:04:03.804956 | orchestrator | + echo
2026-01-10 15:04:03.804961 | orchestrator | + echo '## Images @ testbed-manager'
2026-01-10 15:04:03.804966 | orchestrator | + echo
2026-01-10 15:04:03.804970 | orchestrator | + osism container testbed-manager images
2026-01-10 15:04:06.161176 | orchestrator | REPOSITORY  TAG  IMAGE ID  CREATED  SIZE
2026-01-10 15:04:06.161268 | orchestrator | registry.osism.tech/osism/openstackclient  2024.2  bed886ed5921  12 hours ago  238MB
2026-01-10 15:04:06.161277 | orchestrator | registry.osism.tech/osism/homer  v25.10.1  ea34b371c716  5 weeks ago  11.5MB
2026-01-10 15:04:06.161284 | orchestrator | registry.osism.tech/osism/kolla-ansible  0.20251130.0  0f140ec71e5f  5 weeks ago  608MB
2026-01-10 15:04:06.161291 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox  19.7.1.20251130  314d22193a72  5 weeks ago  669MB
2026-01-10 15:04:06.161300 | orchestrator | registry.osism.tech/kolla/release/cron  3.0.20251130  e1e0428a330f  5 weeks ago  265MB
2026-01-10 15:04:06.161307 | orchestrator | registry.osism.tech/kolla/release/fluentd  5.0.8.20251130  fb3c98fc8cae  5 weeks ago  578MB
2026-01-10 15:04:06.161313 | orchestrator | registry.osism.tech/kolla/release/prometheus-blackbox-exporter  0.25.0.20251130  7bbb4f6f4831  5 weeks ago  308MB
2026-01-10 15:04:06.161321 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor  0.49.2.20251130  591cbce746c1  5 weeks ago  357MB
2026-01-10 15:04:06.161327 | orchestrator | registry.osism.tech/kolla/release/prometheus-alertmanager  0.28.0.20251130  ba994ea4acda  5 weeks ago  404MB
2026-01-10 15:04:06.161353 | orchestrator | registry.osism.tech/kolla/release/prometheus-v2-server  2.55.1.20251130  56b43d5c716a  5 weeks ago  839MB
2026-01-10 15:04:06.161360 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter  1.8.2.20251130  c1ab1d07f7ef  5 weeks ago  305MB
2026-01-10 15:04:06.161365 | orchestrator | registry.osism.tech/osism/inventory-reconciler  0.20251130.0  1bfc1dadeee1  5 weeks ago  330MB
2026-01-10 15:04:06.161371 | orchestrator | registry.osism.tech/osism/osism-ansible  0.20251130.0  42988b2d229c  5 weeks ago  613MB
2026-01-10 15:04:06.161377 | orchestrator | registry.osism.tech/osism/ceph-ansible  0.20251130.0  a212d8ca4a50  5 weeks ago  560MB
2026-01-10 15:04:06.161383 | orchestrator | registry.osism.tech/osism/osism-kubernetes  0.20251130.0  9beff03cb77b  5 weeks ago  1.23GB
2026-01-10 15:04:06.161389 | orchestrator | registry.osism.tech/osism/osism  0.20251130.1  95213af683ec  5 weeks ago  383MB
2026-01-10 15:04:06.161395 | orchestrator | registry.osism.tech/osism/osism-frontend  0.20251130.1  2cb6e7609620  5 weeks ago  238MB
2026-01-10 15:04:06.161401 | orchestrator | registry.osism.tech/dockerhub/library/mariadb  11.8.4  70745dd8f1d0  8 weeks ago  334MB
2026-01-10 15:04:06.161407 | orchestrator | registry.osism.tech/dockerhub/library/redis  7.4.7-alpine  13105d2858de  2 months ago  41.4MB
2026-01-10 15:04:06.161413 | orchestrator | phpmyadmin/phpmyadmin  5.2  e66b1f5a8c58  3 months ago  742MB
2026-01-10 15:04:06.161418 | orchestrator | registry.osism.tech/osism/ara-server  1.7.3  d1b687333f2f  4 months ago  275MB
2026-01-10 15:04:06.161425 | orchestrator | registry.osism.tech/dockerhub/library/traefik  v3.5.0  11cc59587f6a  5 months ago  226MB
2026-01-10 15:04:06.161430 | orchestrator | registry.osism.tech/osism/cephclient  18.2.7  ae977aa79826  8 months ago  453MB
2026-01-10 15:04:06.161436 | orchestrator | registry.osism.tech/dockerhub/ubuntu/squid  6.1-23.10_beta  34b6bbbcf74b  19 months ago  146MB
2026-01-10 15:04:06.491541 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2026-01-10 15:04:06.492532 | orchestrator | ++ semver 9.5.0 5.0.0
2026-01-10 15:04:06.553105 | orchestrator |
2026-01-10 15:04:06.553233 | orchestrator | ## Containers @ testbed-node-0
2026-01-10 15:04:06.553243 | orchestrator |
2026-01-10 15:04:06.553249 | orchestrator | + [[ 1 -eq -1 ]]
2026-01-10 15:04:06.553254 | orchestrator | + echo
2026-01-10 15:04:06.553261 | orchestrator | + echo '## Containers @ testbed-node-0'
2026-01-10 15:04:06.553267 | orchestrator | + echo
2026-01-10 15:04:06.553273 | orchestrator | + osism container testbed-node-0 ps
2026-01-10 15:04:08.987925 | orchestrator | CONTAINER ID  IMAGE  COMMAND  CREATED  STATUS  PORTS  NAMES
2026-01-10 15:04:08.988044 | orchestrator | d7c34fa34405  registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130  "dumb-init --single-…"  4 minutes ago  Up 4 minutes (healthy)  octavia_worker
2026-01-10 15:04:08.988061 | orchestrator | 4638f64cc6a4  registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130  "dumb-init --single-…"  4 minutes ago  Up 4 minutes (healthy)  octavia_housekeeping
2026-01-10 15:04:08.988069 | orchestrator | 32c002445689  registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130  "dumb-init --single-…"  4 minutes ago  Up 4 minutes (healthy)  octavia_health_manager
2026-01-10 15:04:08.988075 | orchestrator | 7700371fa170  registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130  "dumb-init --single-…"  5 minutes ago  Up 4 minutes  octavia_driver_agent
2026-01-10 15:04:08.988082 | orchestrator | ceaa443fa52f  registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130  "dumb-init --single-…"  5 minutes ago  Up 5 minutes (healthy)  octavia_api
2026-01-10 15:04:08.988107 | orchestrator | abbc29c34bb6  registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130  "dumb-init --single-…"  7 minutes ago  Up 7 minutes (healthy)  nova_novncproxy
2026-01-10 15:04:08.988114 | orchestrator | 60dc3077d77c  registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130  "dumb-init --single-…"  8 minutes ago  Up 8 minutes (healthy)  nova_conductor
2026-01-10 15:04:08.988121 | orchestrator | acf317b467ba  registry.osism.tech/kolla/release/nova-api:30.2.1.20251130  "dumb-init --single-…"  9 minutes ago  Up 9 minutes (healthy)  nova_api
2026-01-10 15:04:08.988128 | orchestrator | 21ee8b9d060d  registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130  "dumb-init --single-…"  9 minutes ago  Up 9 minutes (healthy)  nova_scheduler
2026-01-10 15:04:08.988135 | orchestrator | c597dd0d60c4  registry.osism.tech/kolla/release/grafana:12.3.0.20251130  "dumb-init --single-…"  10 minutes ago  Up 10 minutes  grafana
2026-01-10
15:04:08.988141 | orchestrator | fc9565afd9d2 registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) cinder_backup 2026-01-10 15:04:08.988147 | orchestrator | 9cf3f27d79bd registry.osism.tech/kolla/release/glance-api:29.0.1.20251130 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) glance_api 2026-01-10 15:04:08.988154 | orchestrator | 9dbf25c4b17c registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_volume 2026-01-10 15:04:08.988160 | orchestrator | f1a39b57e5c5 registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_scheduler 2026-01-10 15:04:08.988166 | orchestrator | 64158a83fc5c registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130 "dumb-init --single-…" 11 minutes ago Up 11 minutes prometheus_elasticsearch_exporter 2026-01-10 15:04:08.988182 | orchestrator | e5bf2b70430a registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_api 2026-01-10 15:04:08.988188 | orchestrator | 38acbbef2f07 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_cadvisor 2026-01-10 15:04:08.988194 | orchestrator | ad2e590e9b5a registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_memcached_exporter 2026-01-10 15:04:08.988201 | orchestrator | e6dbec0648fd registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_mysqld_exporter 2026-01-10 15:04:08.988226 | orchestrator | 9e3deb2ba89d registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 12 minutes ago Up 12 
minutes prometheus_node_exporter 2026-01-10 15:04:08.988233 | orchestrator | 7ebc13420ac1 registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) magnum_conductor 2026-01-10 15:04:08.988238 | orchestrator | 0276cadb3d9f registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) magnum_api 2026-01-10 15:04:08.988244 | orchestrator | d70f1e8598fb registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) neutron_server 2026-01-10 15:04:08.988256 | orchestrator | 09702168bb08 registry.osism.tech/kolla/release/placement-api:12.0.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) placement_api 2026-01-10 15:04:08.988261 | orchestrator | 5f0a46f4793a registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_worker 2026-01-10 15:04:08.988268 | orchestrator | 3dfaebdf13ae registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_mdns 2026-01-10 15:04:08.988274 | orchestrator | 9dcccc79a163 registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_producer 2026-01-10 15:04:08.988284 | orchestrator | 0bc227cd5b38 registry.osism.tech/kolla/release/designate-central:19.0.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_central 2026-01-10 15:04:08.988291 | orchestrator | df724477a14d registry.osism.tech/kolla/release/designate-api:19.0.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_api 2026-01-10 15:04:08.988296 | orchestrator | 3c60103aa7f8 registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130 "dumb-init --single-…" 15 minutes 
ago Up 15 minutes (healthy) designate_backend_bind9 2026-01-10 15:04:08.988302 | orchestrator | ce194cc85541 registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) barbican_worker 2026-01-10 15:04:08.988308 | orchestrator | 80d31850176d registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) barbican_keystone_listener 2026-01-10 15:04:08.988313 | orchestrator | 87b91d9e1188 registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) barbican_api 2026-01-10 15:04:08.988320 | orchestrator | 95397d80bcf4 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 16 minutes ago Up 16 minutes ceph-mgr-testbed-node-0 2026-01-10 15:04:08.988326 | orchestrator | aa8fea9a0ce4 registry.osism.tech/kolla/release/keystone:26.0.1.20251130 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone 2026-01-10 15:04:08.988332 | orchestrator | 0dad4b926bce registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone_fernet 2026-01-10 15:04:08.988338 | orchestrator | ce51f7f16a1d registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone_ssh 2026-01-10 15:04:08.988344 | orchestrator | 4e7c5c6dc355 registry.osism.tech/kolla/release/horizon:25.1.2.20251130 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) horizon 2026-01-10 15:04:08.988351 | orchestrator | 6586b75e1832 registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130 "dumb-init -- kolla_…" 21 minutes ago Up 21 minutes (healthy) mariadb 2026-01-10 15:04:08.988357 | orchestrator | 508af307a446 registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes 
(healthy) opensearch_dashboards 2026-01-10 15:04:08.988371 | orchestrator | f536da7de44a registry.osism.tech/kolla/release/opensearch:2.19.4.20251130 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) opensearch 2026-01-10 15:04:08.988378 | orchestrator | ca6dd3d58483 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" 23 minutes ago Up 23 minutes ceph-crash-testbed-node-0 2026-01-10 15:04:08.988393 | orchestrator | ed2ab09f3303 registry.osism.tech/kolla/release/keepalived:2.2.8.20251130 "dumb-init --single-…" 24 minutes ago Up 23 minutes keepalived 2026-01-10 15:04:08.988400 | orchestrator | bd53c29c2232 registry.osism.tech/kolla/release/proxysql:3.0.3.20251130 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) proxysql 2026-01-10 15:04:08.988406 | orchestrator | af1553acd4fa registry.osism.tech/kolla/release/haproxy:2.8.15.20251130 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) haproxy 2026-01-10 15:04:08.988411 | orchestrator | bfc33a864ed0 registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_northd 2026-01-10 15:04:08.988417 | orchestrator | ab9bb3d4202c registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_sb_db 2026-01-10 15:04:08.988423 | orchestrator | b7313486a0ea registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_nb_db 2026-01-10 15:04:08.988428 | orchestrator | f1f2149b4807 registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes ovn_controller 2026-01-10 15:04:08.988434 | orchestrator | ec0e548c7d9b registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" 28 minutes ago Up 28 minutes ceph-mon-testbed-node-0 2026-01-10 15:04:08.988439 | orchestrator | 90f9021876ce registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130 "dumb-init 
--single-…" 29 minutes ago Up 29 minutes (healthy) rabbitmq 2026-01-10 15:04:08.988446 | orchestrator | 23286ec7bdbd registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) openvswitch_vswitchd 2026-01-10 15:04:08.988451 | orchestrator | a41c1adb28c9 registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) openvswitch_db 2026-01-10 15:04:08.988457 | orchestrator | f45755523246 registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) redis_sentinel 2026-01-10 15:04:08.988463 | orchestrator | 63f4fbb20f83 registry.osism.tech/kolla/release/redis:7.0.15.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) redis 2026-01-10 15:04:08.988470 | orchestrator | 4a702fdaf81b registry.osism.tech/kolla/release/memcached:1.6.24.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) memcached 2026-01-10 15:04:08.988480 | orchestrator | 6ee530ef1604 registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes cron 2026-01-10 15:04:08.988486 | orchestrator | c70fe1a5e408 registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes kolla_toolbox 2026-01-10 15:04:08.988492 | orchestrator | 68d3489ce070 registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes fluentd 2026-01-10 15:04:09.348257 | orchestrator | 2026-01-10 15:04:09.348346 | orchestrator | ## Images @ testbed-node-0 2026-01-10 15:04:09.348356 | orchestrator | 2026-01-10 15:04:09.348364 | orchestrator | + echo 2026-01-10 15:04:09.348397 | orchestrator | + echo '## Images @ testbed-node-0' 2026-01-10 15:04:09.348406 | orchestrator | + echo 2026-01-10 15:04:09.348413 | orchestrator | + osism container testbed-node-0 images 
2026-01-10 15:04:11.700650 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2026-01-10 15:04:11.700741 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 5 weeks ago 322MB
2026-01-10 15:04:11.700747 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20251130 8a9865997707 5 weeks ago 266MB
2026-01-10 15:04:11.700752 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 5 weeks ago 1.56GB
2026-01-10 15:04:11.700756 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 5 weeks ago 276MB
2026-01-10 15:04:11.700760 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 5 weeks ago 1.53GB
2026-01-10 15:04:11.700764 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 5 weeks ago 669MB
2026-01-10 15:04:11.700767 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 5 weeks ago 265MB
2026-01-10 15:04:11.700771 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 5 weeks ago 1.02GB
2026-01-10 15:04:11.700776 | orchestrator | registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 5 weeks ago 412MB
2026-01-10 15:04:11.700779 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.8.15.20251130 6d4c583df983 5 weeks ago 274MB
2026-01-10 15:04:11.700783 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 5 weeks ago 578MB
2026-01-10 15:04:11.700787 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 5 weeks ago 273MB
2026-01-10 15:04:11.700791 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 5 weeks ago 273MB
2026-01-10 15:04:11.700795 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 5 weeks ago 452MB
2026-01-10 15:04:11.700799 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 5 weeks ago 1.15GB
2026-01-10 15:04:11.700803 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 aedc672fb472 5 weeks ago 301MB
2026-01-10 15:04:11.700806 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 5 weeks ago 298MB
2026-01-10 15:04:11.700810 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 5 weeks ago 357MB
2026-01-10 15:04:11.700830 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 5 weeks ago 292MB
2026-01-10 15:04:11.700837 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 5 weeks ago 305MB
2026-01-10 15:04:11.700846 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 5 weeks ago 279MB
2026-01-10 15:04:11.700854 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 5 weeks ago 279MB
2026-01-10 15:04:11.700859 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 5 weeks ago 975MB
2026-01-10 15:04:11.700864 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 5 weeks ago 1.37GB
2026-01-10 15:04:11.700870 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 5 weeks ago 1.21GB
2026-01-10 15:04:11.700896 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 5 weeks ago 1.21GB
2026-01-10 15:04:11.700902 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 5 weeks ago 1.21GB
2026-01-10 15:04:11.700908 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.2.20251130 37cb6975d4a5 5 weeks ago 976MB
2026-01-10 15:04:11.700914 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.2.20251130 bb2927b293dc 5 weeks ago 976MB
2026-01-10 15:04:11.700919 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 5 weeks ago 1.13GB
2026-01-10 15:04:11.700925 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 5 weeks ago 1.24GB
2026-01-10 15:04:11.700946 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20251130 1e68c23a9d38 5 weeks ago 974MB
2026-01-10 15:04:11.700953 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20251130 1726a7592f93 5 weeks ago 974MB
2026-01-10 15:04:11.700960 | orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20251130 abbd6e9f87e2 5 weeks ago 974MB
2026-01-10 15:04:11.700965 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20251130 82a64f1d056d 5 weeks ago 973MB
2026-01-10 15:04:11.700972 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 5 weeks ago 991MB
2026-01-10 15:04:11.700977 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 5 weeks ago 991MB
2026-01-10 15:04:11.700984 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 5 weeks ago 990MB
2026-01-10 15:04:11.700990 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 5 weeks ago 1.09GB
2026-01-10 15:04:11.700996 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 5 weeks ago 1.04GB
2026-01-10 15:04:11.701002 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 5 weeks ago 1.04GB
2026-01-10 15:04:11.701008 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 5 weeks ago 1.03GB
2026-01-10 15:04:11.701014 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 5 weeks ago 1.03GB
2026-01-10 15:04:11.701021 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 5 weeks ago 1.05GB
2026-01-10 15:04:11.701027 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 5 weeks ago 1.03GB
2026-01-10 15:04:11.701033 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 5 weeks ago 1.05GB
2026-01-10 15:04:11.701039 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 5 weeks ago 1.16GB
2026-01-10 15:04:11.701046 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 b1fcfbc49057 5 weeks ago 1.1GB
2026-01-10 15:04:11.701051 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 5 weeks ago 983MB
2026-01-10 15:04:11.701058 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 5 weeks ago 989MB
2026-01-10 15:04:11.701064 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 5 weeks ago 984MB
2026-01-10 15:04:11.701077 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 5 weeks ago 984MB
2026-01-10 15:04:11.701083 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 5 weeks ago 989MB
2026-01-10 15:04:11.701088 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 5 weeks ago 984MB
2026-01-10 15:04:11.701096 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20251130 20430a0acd38 5 weeks ago 1.05GB
2026-01-10 15:04:11.701100 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20251130 20bbe1600b66 5 weeks ago 990MB
2026-01-10 15:04:11.701103 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 5 weeks ago 1.72GB
2026-01-10 15:04:11.701107 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 5 weeks ago 1.4GB
2026-01-10 15:04:11.701111 | orchestrator | registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 5 weeks ago 1.41GB
2026-01-10 15:04:11.701114 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 5 weeks ago 1.4GB
2026-01-10 15:04:11.701118 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 5 weeks ago 840MB
2026-01-10 15:04:11.701122 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 5 weeks ago 840MB
2026-01-10 15:04:11.701126 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 5 weeks ago 840MB
2026-01-10 15:04:11.701134 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 5 weeks ago 840MB
2026-01-10 15:04:11.701138 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 8 months ago 1.27GB
2026-01-10 15:04:12.048492 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2026-01-10 15:04:12.049306 | orchestrator | ++ semver 9.5.0 5.0.0
2026-01-10 15:04:12.108509 | orchestrator |
2026-01-10 15:04:12.108587 | orchestrator | ## Containers @ testbed-node-1
2026-01-10 15:04:12.108595 | orchestrator |
2026-01-10 15:04:12.108601 | orchestrator | + [[ 1 -eq -1 ]]
2026-01-10 15:04:12.108608 | orchestrator | + echo
2026-01-10 15:04:12.108640 | orchestrator | + echo '## Containers @ testbed-node-1'
2026-01-10 15:04:12.108653 | orchestrator | + echo
2026-01-10 15:04:12.108659 | orchestrator | + osism container testbed-node-1 ps
2026-01-10 15:04:14.488877 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2026-01-10 15:04:14.488958 | orchestrator | eb81690ec6e2 registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker
2026-01-10 15:04:14.488965 | orchestrator | cb31590080e4 registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping
2026-01-10 15:04:14.488970 | orchestrator | 8c1af3ae03d6 registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager
2026-01-10 15:04:14.488975 | orchestrator | 72d52965e2ea registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130 "dumb-init --single-…" 5 minutes ago Up 5 minutes octavia_driver_agent
2026-01-10 15:04:14.488992 | orchestrator | 96445dbed479 registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_api
2026-01-10 15:04:14.489039 | orchestrator | 33b78b1a2183 registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_novncproxy
2026-01-10 15:04:14.489045 | orchestrator | bdf376b20936 registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_conductor
2026-01-10 15:04:14.489051 | orchestrator | 3a94192d84a7 registry.osism.tech/kolla/release/grafana:12.3.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes grafana
2026-01-10 15:04:14.489059 | orchestrator | ba5bde02004d registry.osism.tech/kolla/release/nova-api:30.2.1.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_api
2026-01-10 15:04:14.489063 | orchestrator | 923f20f482ae registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_scheduler
2026-01-10 15:04:14.489066 | orchestrator | e4e2bab95cb2 registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_backup
2026-01-10 15:04:14.489073 | orchestrator | 3ee74341b898 registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_volume
2026-01-10 15:04:14.489077 | orchestrator | 4ce8d15d549e registry.osism.tech/kolla/release/glance-api:29.0.1.20251130 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) glance_api
2026-01-10 15:04:14.489081 | orchestrator | 24229473a103 registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_scheduler
2026-01-10 15:04:14.489085 | orchestrator | 9cc0a90491c1 registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_api
2026-01-10 15:04:14.489088 | orchestrator | 4509851003f8 registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_elasticsearch_exporter
2026-01-10 15:04:14.489094 | orchestrator | a05e4fe3a6cd registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_cadvisor
2026-01-10 15:04:14.489099 | orchestrator | 7d365e334de4 registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_memcached_exporter
2026-01-10 15:04:14.489106 | orchestrator | 95eaf212eed6 registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_mysqld_exporter
2026-01-10 15:04:14.489125 | orchestrator | 479182f52314 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 13 minutes ago Up 12 minutes prometheus_node_exporter
2026-01-10 15:04:14.489131 | orchestrator | f833c550f862 registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) magnum_conductor
2026-01-10 15:04:14.489136 | orchestrator | 04532a40f690 registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) magnum_api
2026-01-10 15:04:14.489143 | orchestrator | 28410490b488 registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) neutron_server
2026-01-10 15:04:14.489158 | orchestrator | 630497bf2a22 registry.osism.tech/kolla/release/placement-api:12.0.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) placement_api
2026-01-10 15:04:14.489163 | orchestrator | cc7ad223792b registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_worker
2026-01-10 15:04:14.489169 | orchestrator | 2eac913a5577 registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_mdns
2026-01-10 15:04:14.489178 | orchestrator | 12b1429a6090 registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_producer
2026-01-10 15:04:14.489184 | orchestrator | 288ab1032137 registry.osism.tech/kolla/release/designate-central:19.0.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_central
2026-01-10 15:04:14.489192 | orchestrator | 8f920454b873 registry.osism.tech/kolla/release/designate-api:19.0.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_api
2026-01-10 15:04:14.489195 | orchestrator | 0817434ddcc9 registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_backend_bind9
2026-01-10 15:04:14.489199 | orchestrator | 4343703ac610 registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) barbican_worker
2026-01-10 15:04:14.489203 | orchestrator | 87f6961a0f8f registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) barbican_keystone_listener
2026-01-10 15:04:14.489207 | orchestrator | 73073aad7b0d registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) barbican_api
2026-01-10 15:04:14.489210 | orchestrator | 6f728c76a4aa registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 16 minutes ago Up 16 minutes ceph-mgr-testbed-node-1
2026-01-10 15:04:14.489214 | orchestrator | 1866352775ef registry.osism.tech/kolla/release/keystone:26.0.1.20251130 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone
2026-01-10 15:04:14.489218 | orchestrator | e3a3c37d519b registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone_fernet
2026-01-10 15:04:14.489221 | orchestrator | 662c96ec50f9 registry.osism.tech/kolla/release/horizon:25.1.2.20251130 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) horizon
2026-01-10 15:04:14.489225 | orchestrator | db5900730a50 registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone_ssh
2026-01-10 15:04:14.489229 | orchestrator | 1a8b9bb53ee6 registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) opensearch_dashboards
2026-01-10 15:04:14.489233 | orchestrator | 93849cb72ac8 registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130 "dumb-init -- kolla_…" 22 minutes ago Up 22 minutes (healthy) mariadb
2026-01-10 15:04:14.489241 | orchestrator | c88438f5d5e0 registry.osism.tech/kolla/release/opensearch:2.19.4.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) opensearch
2026-01-10 15:04:14.489245 | orchestrator | 9822839187bd registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" 23 minutes ago Up 23 minutes ceph-crash-testbed-node-1
2026-01-10 15:04:14.489253 | orchestrator | c69863fc3d87 registry.osism.tech/kolla/release/keepalived:2.2.8.20251130 "dumb-init --single-…" 24 minutes ago Up 24 minutes keepalived
2026-01-10 15:04:14.489257 | orchestrator | 97edf91db606 registry.osism.tech/kolla/release/proxysql:3.0.3.20251130 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) proxysql
2026-01-10 15:04:14.489260 | orchestrator | 5bedfab79c1e registry.osism.tech/kolla/release/haproxy:2.8.15.20251130 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) haproxy
2026-01-10 15:04:14.489264 | orchestrator | 1de8f2f8dd7e registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_northd
2026-01-10 15:04:14.489268 | orchestrator | a5e134a06016 registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_sb_db
2026-01-10 15:04:14.489272 | orchestrator | bc9ddf512603 registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_nb_db
2026-01-10 15:04:14.489275 | orchestrator | d67886f2a3a0 registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes ovn_controller
2026-01-10 15:04:14.489279 | orchestrator | 7a46627a97c0 registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) rabbitmq
2026-01-10 15:04:14.489283 | orchestrator | a4699e1e8617 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" 28 minutes ago Up 28 minutes ceph-mon-testbed-node-1
2026-01-10 15:04:14.489289 | orchestrator | 32e9424e649e registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130 "dumb-init --single-…" 30 minutes ago Up 29 minutes (healthy) openvswitch_vswitchd
2026-01-10 15:04:14.489293 | orchestrator | 563ee57da074 registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) openvswitch_db
2026-01-10 15:04:14.489296 | orchestrator | 78444d7d6d44 registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) redis_sentinel
2026-01-10 15:04:14.489300 | orchestrator | faeaac67ebaf registry.osism.tech/kolla/release/redis:7.0.15.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) redis
2026-01-10 15:04:14.489304 | orchestrator | 76141d838e8d registry.osism.tech/kolla/release/memcached:1.6.24.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) memcached
2026-01-10 15:04:14.489308 | orchestrator | da645b398465 registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes cron
2026-01-10 15:04:14.489312 | orchestrator | 74deaeed7cab registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes kolla_toolbox
2026-01-10 15:04:14.489315 | orchestrator | cb4c9de07c1f registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes fluentd
2026-01-10 15:04:14.859359 | orchestrator |
2026-01-10 15:04:14.859442 | orchestrator | ## Images @ testbed-node-1
2026-01-10 15:04:14.859451 | orchestrator |
2026-01-10 15:04:14.859458 | orchestrator | + echo
2026-01-10 15:04:14.859484 | orchestrator | + echo '## Images @ testbed-node-1'
2026-01-10 15:04:14.859492 | orchestrator | + echo
2026-01-10 15:04:14.859498 | orchestrator | + osism container testbed-node-1 images
2026-01-10 15:04:17.264223 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-01-10 15:04:17.264299 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 5 weeks ago 322MB 2026-01-10 15:04:17.264305 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20251130 8a9865997707 5 weeks ago 266MB 2026-01-10 15:04:17.264309 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 5 weeks ago 1.56GB 2026-01-10 15:04:17.264314 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 5 weeks ago 276MB 2026-01-10 15:04:17.264318 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 5 weeks ago 1.53GB 2026-01-10 15:04:17.264321 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 5 weeks ago 669MB 2026-01-10 15:04:17.264325 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 5 weeks ago 265MB 2026-01-10 15:04:17.264329 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 5 weeks ago 1.02GB 2026-01-10 15:04:17.264333 | orchestrator | registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 5 weeks ago 412MB 2026-01-10 15:04:17.264336 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.8.15.20251130 6d4c583df983 5 weeks ago 274MB 2026-01-10 15:04:17.264340 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 5 weeks ago 578MB 2026-01-10 15:04:17.264344 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 5 weeks ago 273MB 2026-01-10 15:04:17.264348 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 5 weeks ago 273MB 2026-01-10 15:04:17.264352 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 5 weeks ago 452MB 2026-01-10 
15:04:17.264356 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 5 weeks ago 1.15GB 2026-01-10 15:04:17.264360 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 aedc672fb472 5 weeks ago 301MB 2026-01-10 15:04:17.264364 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 5 weeks ago 298MB 2026-01-10 15:04:17.264367 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 5 weeks ago 357MB 2026-01-10 15:04:17.264371 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 5 weeks ago 292MB 2026-01-10 15:04:17.264375 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 5 weeks ago 305MB 2026-01-10 15:04:17.264379 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 5 weeks ago 279MB 2026-01-10 15:04:17.264385 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 5 weeks ago 975MB 2026-01-10 15:04:17.264391 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 5 weeks ago 279MB 2026-01-10 15:04:17.264397 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 5 weeks ago 1.37GB 2026-01-10 15:04:17.264403 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 5 weeks ago 1.21GB 2026-01-10 15:04:17.264429 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 5 weeks ago 1.21GB 2026-01-10 15:04:17.264435 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 5 weeks ago 1.21GB 2026-01-10 15:04:17.264441 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 5 weeks 
ago 1.13GB 2026-01-10 15:04:17.264447 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 5 weeks ago 1.24GB 2026-01-10 15:04:17.264453 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 5 weeks ago 991MB 2026-01-10 15:04:17.264471 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 5 weeks ago 991MB 2026-01-10 15:04:17.264537 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 5 weeks ago 990MB 2026-01-10 15:04:17.264543 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 5 weeks ago 1.09GB 2026-01-10 15:04:17.264547 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 5 weeks ago 1.04GB 2026-01-10 15:04:17.264551 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 5 weeks ago 1.04GB 2026-01-10 15:04:17.264555 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 5 weeks ago 1.03GB 2026-01-10 15:04:17.264559 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 5 weeks ago 1.03GB 2026-01-10 15:04:17.264563 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 5 weeks ago 1.05GB 2026-01-10 15:04:17.264566 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 5 weeks ago 1.03GB 2026-01-10 15:04:17.264570 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 5 weeks ago 1.05GB 2026-01-10 15:04:17.264574 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 5 weeks ago 1.16GB 2026-01-10 15:04:17.264577 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 b1fcfbc49057 5 weeks ago 1.1GB 
2026-01-10 15:04:17.264581 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 5 weeks ago 983MB 2026-01-10 15:04:17.264585 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 5 weeks ago 989MB 2026-01-10 15:04:17.264604 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 5 weeks ago 984MB 2026-01-10 15:04:17.264608 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 5 weeks ago 984MB 2026-01-10 15:04:17.264612 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 5 weeks ago 989MB 2026-01-10 15:04:17.264616 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 5 weeks ago 984MB 2026-01-10 15:04:17.264656 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 5 weeks ago 1.72GB 2026-01-10 15:04:17.264666 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 5 weeks ago 1.4GB 2026-01-10 15:04:17.264672 | orchestrator | registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 5 weeks ago 1.41GB 2026-01-10 15:04:17.264687 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 5 weeks ago 1.4GB 2026-01-10 15:04:17.264693 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 5 weeks ago 840MB 2026-01-10 15:04:17.264700 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 5 weeks ago 840MB 2026-01-10 15:04:17.264705 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 5 weeks ago 840MB 2026-01-10 15:04:17.264709 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 5 weeks ago 840MB 2026-01-10 
15:04:17.264713 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 8 months ago 1.27GB 2026-01-10 15:04:17.629428 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-01-10 15:04:17.629730 | orchestrator | ++ semver 9.5.0 5.0.0 2026-01-10 15:04:17.668394 | orchestrator | 2026-01-10 15:04:17.668480 | orchestrator | ## Containers @ testbed-node-2 2026-01-10 15:04:17.668506 | orchestrator | 2026-01-10 15:04:17.668520 | orchestrator | + [[ 1 -eq -1 ]] 2026-01-10 15:04:17.668535 | orchestrator | + echo 2026-01-10 15:04:17.668546 | orchestrator | + echo '## Containers @ testbed-node-2' 2026-01-10 15:04:17.668554 | orchestrator | + echo 2026-01-10 15:04:17.668561 | orchestrator | + osism container testbed-node-2 ps 2026-01-10 15:04:20.042000 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-01-10 15:04:20.042159 | orchestrator | f51e799f64db registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker 2026-01-10 15:04:20.042172 | orchestrator | 739dbd59430f registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping 2026-01-10 15:04:20.042177 | orchestrator | 7c6cc41793a4 registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager 2026-01-10 15:04:20.042181 | orchestrator | 76b582c866bd registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130 "dumb-init --single-…" 5 minutes ago Up 5 minutes octavia_driver_agent 2026-01-10 15:04:20.042186 | orchestrator | 6781d7140c40 registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_api 2026-01-10 15:04:20.042190 | orchestrator | e012fa8881d5 
registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_novncproxy 2026-01-10 15:04:20.042194 | orchestrator | f53dd44156bd registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_conductor 2026-01-10 15:04:20.042198 | orchestrator | f4bb70069096 registry.osism.tech/kolla/release/grafana:12.3.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes grafana 2026-01-10 15:04:20.042202 | orchestrator | 95b37de0fad5 registry.osism.tech/kolla/release/nova-api:30.2.1.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_api 2026-01-10 15:04:20.042205 | orchestrator | 3c987abe1727 registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_scheduler 2026-01-10 15:04:20.042209 | orchestrator | 13c1cc214357 registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_backup 2026-01-10 15:04:20.042229 | orchestrator | 2f0b34dd34e0 registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_volume 2026-01-10 15:04:20.042239 | orchestrator | 718957af8227 registry.osism.tech/kolla/release/glance-api:29.0.1.20251130 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) glance_api 2026-01-10 15:04:20.042243 | orchestrator | 1279b555b9bc registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_scheduler 2026-01-10 15:04:20.042247 | orchestrator | 8b9753e0f729 registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_api 2026-01-10 15:04:20.042251 | orchestrator | 071c99e3e50d 
registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_elasticsearch_exporter 2026-01-10 15:04:20.042257 | orchestrator | 23ed81c46aef registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_cadvisor 2026-01-10 15:04:20.042261 | orchestrator | 3dd07873d55c registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_memcached_exporter 2026-01-10 15:04:20.042266 | orchestrator | 3b7bf9cc74bd registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_mysqld_exporter 2026-01-10 15:04:20.042282 | orchestrator | ec2b36ce18d7 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_node_exporter 2026-01-10 15:04:20.042288 | orchestrator | 0521d00b519d registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) magnum_conductor 2026-01-10 15:04:20.042294 | orchestrator | 5e95a0dc8911 registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) magnum_api 2026-01-10 15:04:20.042300 | orchestrator | 7fa74ee3df27 registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) neutron_server 2026-01-10 15:04:20.042306 | orchestrator | 2b1dee807aec registry.osism.tech/kolla/release/placement-api:12.0.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) placement_api 2026-01-10 15:04:20.042312 | orchestrator | c397cea0392d registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) 
designate_worker 2026-01-10 15:04:20.042318 | orchestrator | bc0317ec6e76 registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_mdns 2026-01-10 15:04:20.042326 | orchestrator | e093dc312af0 registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_producer 2026-01-10 15:04:20.042330 | orchestrator | a8f21c35f4b3 registry.osism.tech/kolla/release/designate-central:19.0.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_central 2026-01-10 15:04:20.042334 | orchestrator | 0fd73e2d39be registry.osism.tech/kolla/release/designate-api:19.0.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_api 2026-01-10 15:04:20.042343 | orchestrator | 6c6d91c22c56 registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_backend_bind9 2026-01-10 15:04:20.042349 | orchestrator | f066139abb1b registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) barbican_worker 2026-01-10 15:04:20.042354 | orchestrator | 8e4ff260bc0f registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) barbican_keystone_listener 2026-01-10 15:04:20.042360 | orchestrator | 78342be8d8aa registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) barbican_api 2026-01-10 15:04:20.042366 | orchestrator | 076260ca0d63 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 16 minutes ago Up 16 minutes ceph-mgr-testbed-node-2 2026-01-10 15:04:20.042372 | orchestrator | 11d2a8a65c41 registry.osism.tech/kolla/release/keystone:26.0.1.20251130 "dumb-init --single-…" 19 minutes ago Up 19 
minutes (healthy) keystone 2026-01-10 15:04:20.042379 | orchestrator | b240c3aa7618 registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone_fernet 2026-01-10 15:04:20.042385 | orchestrator | 03bee3e7c9ad registry.osism.tech/kolla/release/horizon:25.1.2.20251130 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) horizon 2026-01-10 15:04:20.042390 | orchestrator | f02ea3d7b3b1 registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone_ssh 2026-01-10 15:04:20.042396 | orchestrator | ec1af5a2380b registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) opensearch_dashboards 2026-01-10 15:04:20.042403 | orchestrator | b17d6043e933 registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130 "dumb-init -- kolla_…" 21 minutes ago Up 21 minutes (healthy) mariadb 2026-01-10 15:04:20.042412 | orchestrator | e57418024dc5 registry.osism.tech/kolla/release/opensearch:2.19.4.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) opensearch 2026-01-10 15:04:20.042416 | orchestrator | 049e5cac17b4 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" 23 minutes ago Up 23 minutes ceph-crash-testbed-node-2 2026-01-10 15:04:20.042420 | orchestrator | 46499e772fe7 registry.osism.tech/kolla/release/keepalived:2.2.8.20251130 "dumb-init --single-…" 24 minutes ago Up 24 minutes keepalived 2026-01-10 15:04:20.042424 | orchestrator | b6e2f41bf9fa registry.osism.tech/kolla/release/proxysql:3.0.3.20251130 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) proxysql 2026-01-10 15:04:20.042427 | orchestrator | 64e03498cdbf registry.osism.tech/kolla/release/haproxy:2.8.15.20251130 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) haproxy 2026-01-10 15:04:20.042431 | orchestrator | 63d5cf67a9bf 
registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_northd 2026-01-10 15:04:20.042435 | orchestrator | 66d721ee0815 registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_sb_db 2026-01-10 15:04:20.042446 | orchestrator | 7ec3a5e3d0db registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130 "dumb-init --single-…" 28 minutes ago Up 27 minutes ovn_nb_db 2026-01-10 15:04:20.042450 | orchestrator | 1803ed7ae3ba registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) rabbitmq 2026-01-10 15:04:20.042454 | orchestrator | 972ba16de267 registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes ovn_controller 2026-01-10 15:04:20.042458 | orchestrator | 83ac5bb6fee6 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" 28 minutes ago Up 28 minutes ceph-mon-testbed-node-2 2026-01-10 15:04:20.042462 | orchestrator | c5c1e786d70a registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130 "dumb-init --single-…" 30 minutes ago Up 29 minutes (healthy) openvswitch_vswitchd 2026-01-10 15:04:20.042468 | orchestrator | d0d2a6bd7f93 registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) openvswitch_db 2026-01-10 15:04:20.042473 | orchestrator | e367d63a81d1 registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) redis_sentinel 2026-01-10 15:04:20.042479 | orchestrator | cc0a978b7dcc registry.osism.tech/kolla/release/redis:7.0.15.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) redis 2026-01-10 15:04:20.042485 | orchestrator | 7df848651215 registry.osism.tech/kolla/release/memcached:1.6.24.20251130 "dumb-init --single-…" 30 minutes ago 
Up 30 minutes (healthy) memcached 2026-01-10 15:04:20.042490 | orchestrator | e7841952399f registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 31 minutes ago Up 30 minutes cron 2026-01-10 15:04:20.042496 | orchestrator | 05b26a8c5007 registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes kolla_toolbox 2026-01-10 15:04:20.042502 | orchestrator | b31a3fc5db69 registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes fluentd 2026-01-10 15:04:20.456025 | orchestrator | 2026-01-10 15:04:20.456108 | orchestrator | ## Images @ testbed-node-2 2026-01-10 15:04:20.456118 | orchestrator | 2026-01-10 15:04:20.456125 | orchestrator | + echo 2026-01-10 15:04:20.456132 | orchestrator | + echo '## Images @ testbed-node-2' 2026-01-10 15:04:20.456140 | orchestrator | + echo 2026-01-10 15:04:20.456146 | orchestrator | + osism container testbed-node-2 images 2026-01-10 15:04:22.953059 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-01-10 15:04:22.953139 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 5 weeks ago 322MB 2026-01-10 15:04:22.953146 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20251130 8a9865997707 5 weeks ago 266MB 2026-01-10 15:04:22.953152 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 5 weeks ago 1.56GB 2026-01-10 15:04:22.953156 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 5 weeks ago 1.53GB 2026-01-10 15:04:22.953161 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 5 weeks ago 276MB 2026-01-10 15:04:22.953166 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 5 weeks ago 669MB 2026-01-10 15:04:22.953186 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 
e1e0428a330f 5 weeks ago 265MB 2026-01-10 15:04:22.953191 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 5 weeks ago 1.02GB 2026-01-10 15:04:22.953195 | orchestrator | registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 5 weeks ago 412MB 2026-01-10 15:04:22.953200 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.8.15.20251130 6d4c583df983 5 weeks ago 274MB 2026-01-10 15:04:22.953204 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 5 weeks ago 578MB 2026-01-10 15:04:22.953209 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 5 weeks ago 273MB 2026-01-10 15:04:22.953213 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 5 weeks ago 273MB 2026-01-10 15:04:22.953218 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 5 weeks ago 452MB 2026-01-10 15:04:22.953276 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 5 weeks ago 1.15GB 2026-01-10 15:04:22.953281 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 aedc672fb472 5 weeks ago 301MB 2026-01-10 15:04:22.953286 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 5 weeks ago 298MB 2026-01-10 15:04:22.953290 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 5 weeks ago 357MB 2026-01-10 15:04:22.953295 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 5 weeks ago 292MB 2026-01-10 15:04:22.953350 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 5 weeks ago 305MB 2026-01-10 15:04:22.953357 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 5 
weeks ago 279MB 2026-01-10 15:04:22.953362 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 5 weeks ago 975MB 2026-01-10 15:04:22.953366 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 5 weeks ago 279MB 2026-01-10 15:04:22.953371 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 5 weeks ago 1.37GB 2026-01-10 15:04:22.953376 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 5 weeks ago 1.21GB 2026-01-10 15:04:22.953380 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 5 weeks ago 1.21GB 2026-01-10 15:04:22.953385 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 5 weeks ago 1.21GB 2026-01-10 15:04:22.953389 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 5 weeks ago 1.13GB 2026-01-10 15:04:22.953394 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 5 weeks ago 1.24GB 2026-01-10 15:04:22.953398 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 5 weeks ago 991MB 2026-01-10 15:04:22.953403 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 5 weeks ago 991MB 2026-01-10 15:04:22.953407 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 5 weeks ago 990MB 2026-01-10 15:04:22.953412 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 5 weeks ago 1.09GB 2026-01-10 15:04:22.953422 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 5 weeks ago 1.04GB 2026-01-10 15:04:22.953427 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 5 weeks ago 1.04GB 2026-01-10 
15:04:22.953431 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 5 weeks ago 1.03GB 2026-01-10 15:04:22.953436 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 5 weeks ago 1.03GB 2026-01-10 15:04:22.953440 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 5 weeks ago 1.05GB 2026-01-10 15:04:22.953445 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 5 weeks ago 1.03GB 2026-01-10 15:04:22.953449 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 5 weeks ago 1.05GB 2026-01-10 15:04:22.953454 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 5 weeks ago 1.16GB 2026-01-10 15:04:22.953458 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 b1fcfbc49057 5 weeks ago 1.1GB 2026-01-10 15:04:22.953474 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 5 weeks ago 983MB 2026-01-10 15:04:22.953482 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 5 weeks ago 989MB 2026-01-10 15:04:22.953490 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 5 weeks ago 984MB 2026-01-10 15:04:22.953497 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 5 weeks ago 984MB 2026-01-10 15:04:22.953505 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 5 weeks ago 989MB 2026-01-10 15:04:22.953512 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 5 weeks ago 984MB 2026-01-10 15:04:22.953520 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 5 weeks ago 1.72GB 2026-01-10 
15:04:22.953527 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 5 weeks ago 1.4GB 2026-01-10 15:04:22.953535 | orchestrator | registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 5 weeks ago 1.41GB 2026-01-10 15:04:22.953551 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 5 weeks ago 1.4GB 2026-01-10 15:04:22.953559 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 5 weeks ago 840MB 2026-01-10 15:04:22.953573 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 5 weeks ago 840MB 2026-01-10 15:04:22.953581 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 5 weeks ago 840MB 2026-01-10 15:04:22.953589 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 5 weeks ago 840MB 2026-01-10 15:04:22.953599 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 8 months ago 1.27GB 2026-01-10 15:04:23.334052 | orchestrator | + sh -c /opt/configuration/scripts/check-services.sh 2026-01-10 15:04:23.342724 | orchestrator | + set -e 2026-01-10 15:04:23.342813 | orchestrator | + source /opt/manager-vars.sh 2026-01-10 15:04:23.344318 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-01-10 15:04:23.344367 | orchestrator | ++ NUMBER_OF_NODES=6 2026-01-10 15:04:23.344399 | orchestrator | ++ export CEPH_VERSION=reef 2026-01-10 15:04:23.344406 | orchestrator | ++ CEPH_VERSION=reef 2026-01-10 15:04:23.344417 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-01-10 15:04:23.344426 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-01-10 15:04:23.344434 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-01-10 15:04:23.344441 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-01-10 15:04:23.344448 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-01-10 15:04:23.344455 | 
orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-01-10 15:04:23.344461 | orchestrator | ++ export ARA=false
2026-01-10 15:04:23.344467 | orchestrator | ++ ARA=false
2026-01-10 15:04:23.344473 | orchestrator | ++ export DEPLOY_MODE=manager
2026-01-10 15:04:23.344479 | orchestrator | ++ DEPLOY_MODE=manager
2026-01-10 15:04:23.344485 | orchestrator | ++ export TEMPEST=false
2026-01-10 15:04:23.344492 | orchestrator | ++ TEMPEST=false
2026-01-10 15:04:23.344498 | orchestrator | ++ export IS_ZUUL=true
2026-01-10 15:04:23.344504 | orchestrator | ++ IS_ZUUL=true
2026-01-10 15:04:23.344511 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.106
2026-01-10 15:04:23.344518 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.106
2026-01-10 15:04:23.344525 | orchestrator | ++ export EXTERNAL_API=false
2026-01-10 15:04:23.344531 | orchestrator | ++ EXTERNAL_API=false
2026-01-10 15:04:23.344538 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-01-10 15:04:23.344544 | orchestrator | ++ IMAGE_USER=ubuntu
2026-01-10 15:04:23.344551 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-01-10 15:04:23.344557 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-01-10 15:04:23.344564 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-01-10 15:04:23.344572 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-01-10 15:04:23.344628 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-01-10 15:04:23.344687 | orchestrator | + sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh
2026-01-10 15:04:23.355758 | orchestrator | + set -e
2026-01-10 15:04:23.356797 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-01-10 15:04:23.356847 | orchestrator | ++ export INTERACTIVE=false
2026-01-10 15:04:23.356855 | orchestrator | ++ INTERACTIVE=false
2026-01-10 15:04:23.356861 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-01-10 15:04:23.356867 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-01-10 15:04:23.356873 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-01-10 15:04:23.356882 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-01-10 15:04:23.360983 | orchestrator |
2026-01-10 15:04:23.361040 | orchestrator | # Ceph status
2026-01-10 15:04:23.361046 | orchestrator |
2026-01-10 15:04:23.361050 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-01-10 15:04:23.361055 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-01-10 15:04:23.361060 | orchestrator | + echo
2026-01-10 15:04:23.361064 | orchestrator | + echo '# Ceph status'
2026-01-10 15:04:23.361068 | orchestrator | + echo
2026-01-10 15:04:23.361072 | orchestrator | + ceph -s
2026-01-10 15:04:23.925555 | orchestrator | cluster:
2026-01-10 15:04:23.925684 | orchestrator | id: 11111111-1111-1111-1111-111111111111
2026-01-10 15:04:23.925697 | orchestrator | health: HEALTH_OK
2026-01-10 15:04:23.925704 | orchestrator |
2026-01-10 15:04:23.925711 | orchestrator | services:
2026-01-10 15:04:23.925718 | orchestrator | mon: 3 daemons, quorum testbed-node-0,testbed-node-1,testbed-node-2 (age 28m)
2026-01-10 15:04:23.925736 | orchestrator | mgr: testbed-node-1(active, since 16m), standbys: testbed-node-2, testbed-node-0
2026-01-10 15:04:23.925744 | orchestrator | mds: 1/1 daemons up, 2 standby
2026-01-10 15:04:23.925751 | orchestrator | osd: 6 osds: 6 up (since 24m), 6 in (since 25m)
2026-01-10 15:04:23.925758 | orchestrator | rgw: 3 daemons active (3 hosts, 1 zones)
2026-01-10 15:04:23.925764 | orchestrator |
2026-01-10 15:04:23.925771 | orchestrator | data:
2026-01-10 15:04:23.925777 | orchestrator | volumes: 1/1 healthy
2026-01-10 15:04:23.925783 | orchestrator | pools: 14 pools, 417 pgs
2026-01-10 15:04:23.925790 | orchestrator | objects: 521 objects, 2.2 GiB
2026-01-10 15:04:23.925796 | orchestrator | usage: 7.1 GiB used, 113 GiB / 120 GiB avail
2026-01-10 15:04:23.925802 | orchestrator | pgs: 417 active+clean
2026-01-10 15:04:23.925808 | orchestrator |
2026-01-10 15:04:23.979710 | orchestrator |
2026-01-10 15:04:23.979795 | orchestrator | + echo
2026-01-10 15:04:23.979803 | orchestrator | + echo '# Ceph versions'
2026-01-10 15:04:23.980355 | orchestrator | # Ceph versions
2026-01-10 15:04:23.980410 | orchestrator |
2026-01-10 15:04:23.980417 | orchestrator | + echo
2026-01-10 15:04:23.980421 | orchestrator | + ceph versions
2026-01-10 15:04:24.592982 | orchestrator | {
2026-01-10 15:04:24.593091 | orchestrator | "mon": {
2026-01-10 15:04:24.593103 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2026-01-10 15:04:24.593137 | orchestrator | },
2026-01-10 15:04:24.593144 | orchestrator | "mgr": {
2026-01-10 15:04:24.593151 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2026-01-10 15:04:24.593157 | orchestrator | },
2026-01-10 15:04:24.593164 | orchestrator | "osd": {
2026-01-10 15:04:24.593171 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 6
2026-01-10 15:04:24.593178 | orchestrator | },
2026-01-10 15:04:24.593185 | orchestrator | "mds": {
2026-01-10 15:04:24.593192 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2026-01-10 15:04:24.593198 | orchestrator | },
2026-01-10 15:04:24.593204 | orchestrator | "rgw": {
2026-01-10 15:04:24.593212 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2026-01-10 15:04:24.593219 | orchestrator | },
2026-01-10 15:04:24.593226 | orchestrator | "overall": {
2026-01-10 15:04:24.593234 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 18
2026-01-10 15:04:24.593241 | orchestrator | }
2026-01-10 15:04:24.593247 | orchestrator | }
2026-01-10 15:04:24.643049 | orchestrator |
2026-01-10 15:04:24.643127 | orchestrator | # Ceph OSD tree
2026-01-10 15:04:24.643136 | orchestrator |
2026-01-10 15:04:24.643144 | orchestrator | + echo
2026-01-10 15:04:24.643149 | orchestrator | + echo '# Ceph OSD tree'
2026-01-10 15:04:24.643153 | orchestrator | + echo
2026-01-10 15:04:24.643158 | orchestrator | + ceph osd df tree
2026-01-10 15:04:25.190760 | orchestrator | ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME
2026-01-10 15:04:25.190859 | orchestrator | -1 0.11691 - 120 GiB 7.1 GiB 6.7 GiB 6 KiB 430 MiB 113 GiB 5.92 1.00 - root default
2026-01-10 15:04:25.190867 | orchestrator | -3 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-3
2026-01-10 15:04:25.190872 | orchestrator | 0 hdd 0.01949 1.00000 20 GiB 1.5 GiB 1.5 GiB 1 KiB 70 MiB 18 GiB 7.63 1.29 208 up osd.0
2026-01-10 15:04:25.190876 | orchestrator | 4 hdd 0.01949 1.00000 20 GiB 860 MiB 787 MiB 1 KiB 74 MiB 19 GiB 4.20 0.71 198 up osd.4
2026-01-10 15:04:25.190880 | orchestrator | -5 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-4
2026-01-10 15:04:25.190886 | orchestrator | 1 hdd 0.01949 1.00000 20 GiB 885 MiB 811 MiB 1 KiB 74 MiB 19 GiB 4.33 0.73 184 up osd.1
2026-01-10 15:04:25.190892 | orchestrator | 3 hdd 0.01949 1.00000 20 GiB 1.5 GiB 1.4 GiB 1 KiB 70 MiB 18 GiB 7.51 1.27 224 up osd.3
2026-01-10 15:04:25.190898 | orchestrator | -7 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-5
2026-01-10 15:04:25.190904 | orchestrator | 2 hdd 0.01949 1.00000 20 GiB 1.4 GiB 1.3 GiB 1 KiB 70 MiB 19 GiB 6.84 1.16 198 up osd.2
2026-01-10 15:04:25.190910 | orchestrator | 5 hdd 0.01949 1.00000 20 GiB 1021 MiB 947 MiB 1 KiB 74 MiB 19 GiB 4.99 0.84 206 up osd.5
2026-01-10 15:04:25.190917 | orchestrator | TOTAL 120 GiB 7.1 GiB 6.7 GiB 9.3 KiB 430 MiB 113 GiB 5.92
2026-01-10 15:04:25.190923 | orchestrator | MIN/MAX VAR: 0.71/1.29 STDDEV: 1.45
2026-01-10 15:04:25.245214 | orchestrator |
2026-01-10 15:04:25.245285 | orchestrator | # Ceph monitor status
2026-01-10 15:04:25.245291 | orchestrator |
2026-01-10 15:04:25.245295 | orchestrator | + echo
2026-01-10 15:04:25.245300 | orchestrator | + echo '# Ceph monitor status'
2026-01-10 15:04:25.245304 | orchestrator | + echo
2026-01-10 15:04:25.245308 | orchestrator | + ceph mon stat
2026-01-10 15:04:25.830946 | orchestrator | e1: 3 mons at {testbed-node-0=[v2:192.168.16.10:3300/0,v1:192.168.16.10:6789/0],testbed-node-1=[v2:192.168.16.11:3300/0,v1:192.168.16.11:6789/0],testbed-node-2=[v2:192.168.16.12:3300/0,v1:192.168.16.12:6789/0]} removed_ranks: {} disallowed_leaders: {}, election epoch 8, leader 0 testbed-node-0, quorum 0,1,2 testbed-node-0,testbed-node-1,testbed-node-2
2026-01-10 15:04:25.875699 | orchestrator |
2026-01-10 15:04:25.875769 | orchestrator | # Ceph quorum status
2026-01-10 15:04:25.875776 | orchestrator |
2026-01-10 15:04:25.875780 | orchestrator | + echo
2026-01-10 15:04:25.875785 | orchestrator | + echo '# Ceph quorum status'
2026-01-10 15:04:25.875789 | orchestrator | + echo
2026-01-10 15:04:25.876082 | orchestrator | + ceph quorum_status
2026-01-10 15:04:25.876093 | orchestrator | + jq
2026-01-10 15:04:26.481607 | orchestrator | {
2026-01-10 15:04:26.481709 | orchestrator | "election_epoch": 8,
2026-01-10 15:04:26.481716 | orchestrator | "quorum": [
2026-01-10 15:04:26.481721 | orchestrator | 0,
2026-01-10 15:04:26.481725 | orchestrator | 1,
2026-01-10 15:04:26.481729 | orchestrator | 2
2026-01-10 15:04:26.481733 | orchestrator | ],
2026-01-10 15:04:26.481737 | orchestrator | "quorum_names": [
2026-01-10 15:04:26.481741 | orchestrator | "testbed-node-0",
2026-01-10 15:04:26.481745 | orchestrator | "testbed-node-1",
2026-01-10 15:04:26.481748 | orchestrator | "testbed-node-2"
2026-01-10 15:04:26.481752 | orchestrator | ],
2026-01-10 15:04:26.481756 | orchestrator | "quorum_leader_name": "testbed-node-0",
2026-01-10 15:04:26.481761 | orchestrator | "quorum_age": 1733,
2026-01-10 15:04:26.481765 | orchestrator | "features": {
2026-01-10 15:04:26.481770 | orchestrator | "quorum_con": "4540138322906710015",
2026-01-10 15:04:26.481773 | orchestrator | "quorum_mon": [
2026-01-10 15:04:26.481777 | orchestrator | "kraken",
2026-01-10 15:04:26.481781 | orchestrator | "luminous",
2026-01-10 15:04:26.481785 | orchestrator | "mimic",
2026-01-10 15:04:26.481819 | orchestrator | "osdmap-prune",
2026-01-10 15:04:26.481824 | orchestrator | "nautilus",
2026-01-10 15:04:26.481828 | orchestrator | "octopus",
2026-01-10 15:04:26.481831 | orchestrator | "pacific",
2026-01-10 15:04:26.481835 | orchestrator | "elector-pinging",
2026-01-10 15:04:26.481839 | orchestrator | "quincy",
2026-01-10 15:04:26.481843 | orchestrator | "reef"
2026-01-10 15:04:26.481846 | orchestrator | ]
2026-01-10 15:04:26.481850 | orchestrator | },
2026-01-10 15:04:26.481854 | orchestrator | "monmap": {
2026-01-10 15:04:26.481858 | orchestrator | "epoch": 1,
2026-01-10 15:04:26.481862 | orchestrator | "fsid": "11111111-1111-1111-1111-111111111111",
2026-01-10 15:04:26.481866 | orchestrator | "modified": "2026-01-10T14:35:14.322167Z",
2026-01-10 15:04:26.481870 | orchestrator | "created": "2026-01-10T14:35:14.322167Z",
2026-01-10 15:04:26.481874 | orchestrator | "min_mon_release": 18,
2026-01-10 15:04:26.481878 | orchestrator | "min_mon_release_name": "reef",
2026-01-10 15:04:26.481882 | orchestrator | "election_strategy": 1,
2026-01-10 15:04:26.481886 | orchestrator | "disallowed_leaders: ": "",
2026-01-10 15:04:26.481904 | orchestrator | "stretch_mode": false,
2026-01-10 15:04:26.481909 | orchestrator | "tiebreaker_mon": "",
2026-01-10 15:04:26.481912 | orchestrator | "removed_ranks: ": "",
2026-01-10 15:04:26.481916 | orchestrator | "features": {
2026-01-10 15:04:26.481927 | orchestrator | "persistent": [
2026-01-10 15:04:26.481931 | orchestrator | "kraken",
2026-01-10 15:04:26.481934 | orchestrator | "luminous",
2026-01-10 15:04:26.481938 | orchestrator | "mimic",
2026-01-10 15:04:26.481942 | orchestrator | "osdmap-prune",
2026-01-10 15:04:26.481951 | orchestrator | "nautilus",
2026-01-10 15:04:26.481955 | orchestrator | "octopus",
2026-01-10 15:04:26.481958 | orchestrator | "pacific",
2026-01-10 15:04:26.481962 | orchestrator | "elector-pinging",
2026-01-10 15:04:26.481966 | orchestrator | "quincy",
2026-01-10 15:04:26.481969 | orchestrator | "reef"
2026-01-10 15:04:26.481973 | orchestrator | ],
2026-01-10 15:04:26.481977 | orchestrator | "optional": []
2026-01-10 15:04:26.481980 | orchestrator | },
2026-01-10 15:04:26.481984 | orchestrator | "mons": [
2026-01-10 15:04:26.481988 | orchestrator | {
2026-01-10 15:04:26.481991 | orchestrator | "rank": 0,
2026-01-10 15:04:26.481995 | orchestrator | "name": "testbed-node-0",
2026-01-10 15:04:26.481999 | orchestrator | "public_addrs": {
2026-01-10 15:04:26.482003 | orchestrator | "addrvec": [
2026-01-10 15:04:26.482006 | orchestrator | {
2026-01-10 15:04:26.482010 | orchestrator | "type": "v2",
2026-01-10 15:04:26.482050 | orchestrator | "addr": "192.168.16.10:3300",
2026-01-10 15:04:26.482055 | orchestrator | "nonce": 0
2026-01-10 15:04:26.482059 | orchestrator | },
2026-01-10 15:04:26.482063 | orchestrator | {
2026-01-10 15:04:26.482067 | orchestrator | "type": "v1",
2026-01-10 15:04:26.482070 | orchestrator | "addr": "192.168.16.10:6789",
2026-01-10 15:04:26.482074 | orchestrator | "nonce": 0
2026-01-10 15:04:26.482078 | orchestrator | }
2026-01-10 15:04:26.482081 | orchestrator | ]
2026-01-10 15:04:26.482085 | orchestrator | },
2026-01-10 15:04:26.482089 | orchestrator | "addr": "192.168.16.10:6789/0",
2026-01-10 15:04:26.482093 | orchestrator | "public_addr": "192.168.16.10:6789/0",
2026-01-10 15:04:26.482117 | orchestrator | "priority": 0,
2026-01-10 15:04:26.482121 | orchestrator | "weight": 0,
2026-01-10 15:04:26.482124 | orchestrator | "crush_location": "{}"
2026-01-10 15:04:26.482128 | orchestrator | },
2026-01-10 15:04:26.482132 | orchestrator | {
2026-01-10 15:04:26.482135 | orchestrator | "rank": 1,
2026-01-10 15:04:26.482139 | orchestrator | "name": "testbed-node-1",
2026-01-10 15:04:26.482143 | orchestrator | "public_addrs": {
2026-01-10 15:04:26.482147 | orchestrator | "addrvec": [
2026-01-10 15:04:26.482150 | orchestrator | {
2026-01-10 15:04:26.482154 | orchestrator | "type": "v2",
2026-01-10 15:04:26.482158 | orchestrator | "addr": "192.168.16.11:3300",
2026-01-10 15:04:26.482162 | orchestrator | "nonce": 0
2026-01-10 15:04:26.482178 | orchestrator | },
2026-01-10 15:04:26.482183 | orchestrator | {
2026-01-10 15:04:26.482187 | orchestrator | "type": "v1",
2026-01-10 15:04:26.482191 | orchestrator | "addr": "192.168.16.11:6789",
2026-01-10 15:04:26.482196 | orchestrator | "nonce": 0
2026-01-10 15:04:26.482201 | orchestrator | }
2026-01-10 15:04:26.482205 | orchestrator | ]
2026-01-10 15:04:26.482209 | orchestrator | },
2026-01-10 15:04:26.482213 | orchestrator | "addr": "192.168.16.11:6789/0",
2026-01-10 15:04:26.482218 | orchestrator | "public_addr": "192.168.16.11:6789/0",
2026-01-10 15:04:26.482222 | orchestrator | "priority": 0,
2026-01-10 15:04:26.482226 | orchestrator | "weight": 0,
2026-01-10 15:04:26.482230 | orchestrator | "crush_location": "{}"
2026-01-10 15:04:26.482235 | orchestrator | },
2026-01-10 15:04:26.482239 | orchestrator | {
2026-01-10 15:04:26.482244 | orchestrator | "rank": 2,
2026-01-10 15:04:26.482248 | orchestrator | "name": "testbed-node-2",
2026-01-10 15:04:26.482252 | orchestrator | "public_addrs": {
2026-01-10 15:04:26.482257 | orchestrator | "addrvec": [
2026-01-10 15:04:26.482261 | orchestrator | {
2026-01-10 15:04:26.482265 | orchestrator | "type": "v2",
2026-01-10 15:04:26.482269 | orchestrator | "addr": "192.168.16.12:3300",
2026-01-10 15:04:26.482274 | orchestrator | "nonce": 0
2026-01-10 15:04:26.482278 | orchestrator | },
2026-01-10 15:04:26.482282 | orchestrator | {
2026-01-10 15:04:26.482286 | orchestrator | "type": "v1",
2026-01-10 15:04:26.482290 | orchestrator | "addr": "192.168.16.12:6789",
2026-01-10 15:04:26.482295 | orchestrator | "nonce": 0
2026-01-10 15:04:26.482299 | orchestrator | }
2026-01-10 15:04:26.482303 | orchestrator | ]
2026-01-10 15:04:26.482307 | orchestrator | },
2026-01-10 15:04:26.482312 | orchestrator | "addr": "192.168.16.12:6789/0",
2026-01-10 15:04:26.482316 | orchestrator | "public_addr": "192.168.16.12:6789/0",
2026-01-10 15:04:26.482323 | orchestrator | "priority": 0,
2026-01-10 15:04:26.482328 | orchestrator | "weight": 0,
2026-01-10 15:04:26.482332 | orchestrator | "crush_location": "{}"
2026-01-10 15:04:26.482336 | orchestrator | }
2026-01-10 15:04:26.482340 | orchestrator | ]
2026-01-10 15:04:26.482345 | orchestrator | }
2026-01-10 15:04:26.482349 | orchestrator | }
2026-01-10 15:04:26.482361 | orchestrator |
2026-01-10 15:04:26.482365 | orchestrator | # Ceph free space status
2026-01-10 15:04:26.482369 | orchestrator |
2026-01-10 15:04:26.482374 | orchestrator | + echo
2026-01-10 15:04:26.482378 | orchestrator | + echo '# Ceph free space status'
2026-01-10 15:04:26.482383 | orchestrator | + echo
2026-01-10 15:04:26.482387 | orchestrator | + ceph df
2026-01-10 15:04:27.099777 | orchestrator | --- RAW STORAGE ---
2026-01-10 15:04:27.099873 | orchestrator | CLASS SIZE AVAIL USED RAW USED %RAW USED
2026-01-10 15:04:27.099893 | orchestrator | hdd 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.92
2026-01-10 15:04:27.099900 | orchestrator | TOTAL 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.92
2026-01-10 15:04:27.099906 | orchestrator |
2026-01-10 15:04:27.099912 | orchestrator | --- POOLS ---
2026-01-10 15:04:27.099919 | orchestrator | POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL
2026-01-10 15:04:27.099927 | orchestrator | .mgr 1 1 577 KiB 2 1.1 MiB 0 52 GiB
2026-01-10 15:04:27.099933 | orchestrator | cephfs_data 2 32 0 B 0 0 B 0 35 GiB
2026-01-10 15:04:27.099939 | orchestrator | cephfs_metadata 3 32 4.4 KiB 22 96 KiB 0 35 GiB
2026-01-10 15:04:27.099946 | orchestrator | default.rgw.buckets.data 4 32 0 B 0 0 B 0 35 GiB
2026-01-10 15:04:27.099953 | orchestrator | default.rgw.buckets.index 5 32 0 B 0 0 B 0 35 GiB
2026-01-10 15:04:27.099986 | orchestrator | default.rgw.control 6 32 0 B 8 0 B 0 35 GiB
2026-01-10 15:04:27.099992 | orchestrator | default.rgw.log 7 32 3.6 KiB 177 408 KiB 0 35 GiB
2026-01-10 15:04:27.099998 | orchestrator | default.rgw.meta 8 32 0 B 0 0 B 0 35 GiB
2026-01-10 15:04:27.100004 | orchestrator | .rgw.root 9 32 2.2 KiB 5 40 KiB 0 52 GiB
2026-01-10 15:04:27.100010 | orchestrator | backups 10 32 19 B 2 12 KiB 0 35 GiB
2026-01-10 15:04:27.100017 | orchestrator | volumes 11 32 19 B 2 12 KiB 0 35 GiB
2026-01-10 15:04:27.100022 | orchestrator | images 12 32 2.2 GiB 299 6.7 GiB 5.98 35 GiB
2026-01-10 15:04:27.100029 | orchestrator | metrics 13 32 19 B 2 12 KiB 0 35 GiB
2026-01-10 15:04:27.100035 | orchestrator | vms 14 32 19 B 2 12 KiB 0 35 GiB
2026-01-10 15:04:27.148125 | orchestrator | ++ semver 9.5.0 5.0.0
2026-01-10 15:04:27.208855 | orchestrator | + [[ 1 -eq -1 ]]
2026-01-10 15:04:27.208923 | orchestrator | + [[ ! -e /etc/redhat-release ]]
2026-01-10 15:04:27.208931 | orchestrator | + osism apply facts
2026-01-10 15:04:29.364622 | orchestrator | 2026-01-10 15:04:29 | INFO  | Task c28d611c-2734-4c57-9c61-04338a5b1ee8 (facts) was prepared for execution.
2026-01-10 15:04:29.364762 | orchestrator | 2026-01-10 15:04:29 | INFO  | It takes a moment until task c28d611c-2734-4c57-9c61-04338a5b1ee8 (facts) has been started and output is visible here.
2026-01-10 15:04:43.294411 | orchestrator |
2026-01-10 15:04:43.294516 | orchestrator | PLAY [Apply role facts] ********************************************************
2026-01-10 15:04:43.294524 | orchestrator |
2026-01-10 15:04:43.294530 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-01-10 15:04:43.294534 | orchestrator | Saturday 10 January 2026 15:04:33 +0000 (0:00:00.289) 0:00:00.289 ******
2026-01-10 15:04:43.294539 | orchestrator | ok: [testbed-manager]
2026-01-10 15:04:43.294544 | orchestrator | ok: [testbed-node-0]
2026-01-10 15:04:43.294548 | orchestrator | ok: [testbed-node-1]
2026-01-10 15:04:43.294552 | orchestrator | ok: [testbed-node-2]
2026-01-10 15:04:43.294557 | orchestrator | ok: [testbed-node-3]
2026-01-10 15:04:43.294560 | orchestrator | ok: [testbed-node-4]
2026-01-10 15:04:43.294565 | orchestrator | ok: [testbed-node-5]
2026-01-10 15:04:43.294568 | orchestrator |
2026-01-10 15:04:43.294572 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-01-10 15:04:43.294576 | orchestrator | Saturday 10 January 2026 15:04:35 +0000 (0:00:01.574) 0:00:01.864 ******
2026-01-10 15:04:43.294580 | orchestrator | skipping: [testbed-manager]
2026-01-10 15:04:43.294585 | orchestrator | skipping: [testbed-node-0]
2026-01-10 15:04:43.294589 | orchestrator | skipping: [testbed-node-1]
2026-01-10 15:04:43.294593 | orchestrator | skipping: [testbed-node-2]
2026-01-10 15:04:43.294597 | orchestrator | skipping: [testbed-node-3]
2026-01-10 15:04:43.294601 | orchestrator | skipping: [testbed-node-4]
2026-01-10 15:04:43.294605 | orchestrator | skipping: [testbed-node-5]
2026-01-10 15:04:43.294609 | orchestrator |
2026-01-10 15:04:43.294613 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-01-10 15:04:43.294617 | orchestrator |
2026-01-10 15:04:43.294620 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-01-10 15:04:43.294624 | orchestrator | Saturday 10 January 2026 15:04:36 +0000 (0:00:01.361) 0:00:03.226 ******
2026-01-10 15:04:43.294628 | orchestrator | ok: [testbed-node-1]
2026-01-10 15:04:43.294632 | orchestrator | ok: [testbed-node-2]
2026-01-10 15:04:43.294636 | orchestrator | ok: [testbed-node-0]
2026-01-10 15:04:43.294639 | orchestrator | ok: [testbed-manager]
2026-01-10 15:04:43.294643 | orchestrator | ok: [testbed-node-3]
2026-01-10 15:04:43.294647 | orchestrator | ok: [testbed-node-4]
2026-01-10 15:04:43.294651 | orchestrator | ok: [testbed-node-5]
2026-01-10 15:04:43.294655 | orchestrator |
2026-01-10 15:04:43.294658 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-01-10 15:04:43.294677 | orchestrator |
2026-01-10 15:04:43.294708 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-01-10 15:04:43.294719 | orchestrator | Saturday 10 January 2026 15:04:42 +0000 (0:00:05.558) 0:00:08.784 ******
2026-01-10 15:04:43.294723 | orchestrator | skipping: [testbed-manager]
2026-01-10 15:04:43.294726 | orchestrator | skipping: [testbed-node-0]
2026-01-10 15:04:43.294730 | orchestrator | skipping: [testbed-node-1]
2026-01-10 15:04:43.294734 | orchestrator | skipping: [testbed-node-2]
2026-01-10 15:04:43.294737 | orchestrator | skipping: [testbed-node-3]
2026-01-10 15:04:43.294741 | orchestrator | skipping: [testbed-node-4]
2026-01-10 15:04:43.294745 | orchestrator | skipping: [testbed-node-5]
2026-01-10 15:04:43.294748 | orchestrator |
2026-01-10 15:04:43.294752 | orchestrator | PLAY RECAP *********************************************************************
2026-01-10 15:04:43.294756 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-10 15:04:43.294772 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-10 15:04:43.294776 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-10 15:04:43.294780 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-10 15:04:43.294784 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-10 15:04:43.294788 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-10 15:04:43.294791 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-10 15:04:43.294795 | orchestrator |
2026-01-10 15:04:43.294801 | orchestrator |
2026-01-10 15:04:43.294807 | orchestrator | TASKS RECAP ********************************************************************
2026-01-10 15:04:43.294813 | orchestrator | Saturday 10 January 2026 15:04:42 +0000 (0:00:00.541) 0:00:09.326 ******
2026-01-10 15:04:43.294818 | orchestrator | ===============================================================================
2026-01-10 15:04:43.294823 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.56s
2026-01-10 15:04:43.294829 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.57s
2026-01-10 15:04:43.294835 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.36s
2026-01-10 15:04:43.294840 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.54s
2026-01-10 15:04:43.639359 | orchestrator | + osism validate ceph-mons
2026-01-10 15:05:16.468215 | orchestrator |
2026-01-10 15:05:16.468339 | orchestrator | PLAY [Ceph validate mons] ******************************************************
2026-01-10 15:05:16.468347 | orchestrator |
2026-01-10 15:05:16.468353 | orchestrator | TASK [Get timestamp for report file] *******************************************
2026-01-10 15:05:16.468358 | orchestrator | Saturday 10 January 2026 15:05:00 +0000 (0:00:00.456) 0:00:00.456 ******
2026-01-10 15:05:16.468363 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-01-10 15:05:16.468368 | orchestrator |
2026-01-10 15:05:16.468372 | orchestrator | TASK [Create report output directory] ******************************************
2026-01-10 15:05:16.468376 | orchestrator | Saturday 10 January 2026 15:05:01 +0000 (0:00:00.860) 0:00:01.317 ******
2026-01-10 15:05:16.468380 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-01-10 15:05:16.468384 | orchestrator |
2026-01-10 15:05:16.468387 | orchestrator | TASK [Define report vars] ******************************************************
2026-01-10 15:05:16.468391 | orchestrator | Saturday 10 January 2026 15:05:02 +0000 (0:00:01.002) 0:00:02.319 ******
2026-01-10 15:05:16.468395 | orchestrator | ok: [testbed-node-0]
2026-01-10 15:05:16.468420 | orchestrator |
2026-01-10 15:05:16.468424 | orchestrator | TASK [Prepare test data for container existance test] **************************
2026-01-10 15:05:16.468428 | orchestrator | Saturday 10 January 2026 15:05:02 +0000 (0:00:00.144) 0:00:02.463 ******
2026-01-10 15:05:16.468432 | orchestrator | ok: [testbed-node-0]
2026-01-10 15:05:16.468443 | orchestrator | ok: [testbed-node-1]
2026-01-10 15:05:16.468447 | orchestrator | ok: [testbed-node-2]
2026-01-10 15:05:16.468451 | orchestrator |
2026-01-10 15:05:16.468455 | orchestrator | TASK [Get container info] ******************************************************
2026-01-10 15:05:16.468458 | orchestrator | Saturday 10 January 2026 15:05:02 +0000 (0:00:00.319) 0:00:02.783 ******
2026-01-10 15:05:16.468462 | orchestrator | ok: [testbed-node-2]
2026-01-10 15:05:16.468466 | orchestrator | ok: [testbed-node-0]
2026-01-10 15:05:16.468469 | orchestrator | ok: [testbed-node-1]
2026-01-10 15:05:16.468473 | orchestrator |
2026-01-10 15:05:16.468477 | orchestrator | TASK [Set test result to failed if container is missing] ***********************
2026-01-10 15:05:16.468481 | orchestrator | Saturday 10 January 2026 15:05:03 +0000 (0:00:00.953) 0:00:03.736 ******
2026-01-10 15:05:16.468485 | orchestrator | skipping: [testbed-node-0]
2026-01-10 15:05:16.468489 | orchestrator | skipping: [testbed-node-1]
2026-01-10 15:05:16.468493 | orchestrator | skipping: [testbed-node-2]
2026-01-10 15:05:16.468497 | orchestrator |
2026-01-10 15:05:16.468500 | orchestrator | TASK [Set test result to passed if container is existing] **********************
2026-01-10 15:05:16.468504 | orchestrator | Saturday 10 January 2026 15:05:04 +0000 (0:00:00.264) 0:00:04.001 ******
2026-01-10 15:05:16.468508 | orchestrator | ok: [testbed-node-0]
2026-01-10 15:05:16.468512 | orchestrator | ok: [testbed-node-1]
2026-01-10 15:05:16.468516 | orchestrator | ok: [testbed-node-2]
2026-01-10 15:05:16.468520 | orchestrator |
2026-01-10 15:05:16.468524 | orchestrator | TASK [Prepare test data] *******************************************************
2026-01-10 15:05:16.468528 | orchestrator | Saturday 10 January 2026 15:05:04 +0000 (0:00:00.418) 0:00:04.419 ******
2026-01-10 15:05:16.468532 | orchestrator | ok: [testbed-node-0]
2026-01-10 15:05:16.468535 | orchestrator | ok: [testbed-node-1]
2026-01-10 15:05:16.468539 | orchestrator | ok: [testbed-node-2]
2026-01-10 15:05:16.468543 | orchestrator |
2026-01-10 15:05:16.468547 | orchestrator | TASK [Set test result to failed if ceph-mon is not running] ********************
2026-01-10 15:05:16.468550 | orchestrator | Saturday 10 January 2026 15:05:04 +0000 (0:00:00.287) 0:00:04.707 ******
2026-01-10 15:05:16.468554 | orchestrator | skipping: [testbed-node-0]
2026-01-10 15:05:16.468558 | orchestrator | skipping: [testbed-node-1]
2026-01-10 15:05:16.468562 | orchestrator | skipping: [testbed-node-2]
2026-01-10 15:05:16.468566 | orchestrator |
2026-01-10 15:05:16.468570 | orchestrator | TASK [Set test result to passed if ceph-mon is running] ************************
2026-01-10 15:05:16.468573 | orchestrator | Saturday 10 January 2026 15:05:05 +0000 (0:00:00.256) 0:00:04.963 ******
2026-01-10 15:05:16.468577 | orchestrator | ok: [testbed-node-0]
2026-01-10 15:05:16.468581 | orchestrator | ok: [testbed-node-1]
2026-01-10 15:05:16.468585 | orchestrator | ok: [testbed-node-2]
2026-01-10 15:05:16.468588 | orchestrator |
2026-01-10 15:05:16.468592 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-01-10 15:05:16.468596 | orchestrator | Saturday 10 January 2026 15:05:05 +0000 (0:00:00.449) 0:00:05.412 ******
2026-01-10 15:05:16.468600 | orchestrator | skipping: [testbed-node-0]
2026-01-10 15:05:16.468604 | orchestrator |
2026-01-10 15:05:16.468608 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-01-10 15:05:16.468612 | orchestrator | Saturday 10 January 2026 15:05:05 +0000 (0:00:00.266) 0:00:05.679 ******
2026-01-10 15:05:16.468616 | orchestrator | skipping: [testbed-node-0]
2026-01-10 15:05:16.468620 | orchestrator |
2026-01-10 15:05:16.468623 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-01-10 15:05:16.468627 | orchestrator | Saturday 10 January 2026 15:05:06 +0000 (0:00:00.278) 0:00:05.957 ******
2026-01-10 15:05:16.468631 | orchestrator | skipping: [testbed-node-0]
2026-01-10 15:05:16.468635 | orchestrator |
2026-01-10 15:05:16.468638 | orchestrator | TASK [Flush handlers] **********************************************************
2026-01-10 15:05:16.468647 | orchestrator | Saturday 10 January 2026 15:05:06 +0000 (0:00:00.248) 0:00:06.205 ******
2026-01-10 15:05:16.468651 | orchestrator |
2026-01-10 15:05:16.468675 | orchestrator | TASK [Flush handlers] **********************************************************
2026-01-10 15:05:16.468679 | orchestrator | Saturday 10 January 2026 15:05:06 +0000 (0:00:00.091) 0:00:06.297 ******
2026-01-10 15:05:16.468683 | orchestrator |
2026-01-10 15:05:16.468687 | orchestrator | TASK [Flush handlers] **********************************************************
2026-01-10 15:05:16.468691 | orchestrator | Saturday 10 January 2026 15:05:06 +0000 (0:00:00.068) 0:00:06.366 ******
2026-01-10 15:05:16.468694 | orchestrator |
2026-01-10 15:05:16.468698 | orchestrator | TASK [Print report file information] *******************************************
2026-01-10 15:05:16.468702 | orchestrator | Saturday 10 January 2026 15:05:06 +0000 (0:00:00.074) 0:00:06.441 ******
2026-01-10 15:05:16.468706 | orchestrator | skipping: [testbed-node-0]
2026-01-10 15:05:16.468729 | orchestrator |
2026-01-10 15:05:16.468734 | orchestrator | TASK [Fail due to missing containers] ******************************************
2026-01-10 15:05:16.468738 | orchestrator | Saturday 10 January 2026 15:05:06 +0000 (0:00:00.270) 0:00:06.712 ******
2026-01-10 15:05:16.468742 | orchestrator | skipping: [testbed-node-0]
2026-01-10 15:05:16.468747 | orchestrator |
2026-01-10 15:05:16.468767 | orchestrator | TASK [Prepare quorum test vars] ************************************************
2026-01-10 15:05:16.468772 | orchestrator | Saturday 10 January 2026 15:05:07 +0000 (0:00:00.306) 0:00:07.019 ******
2026-01-10 15:05:16.468776 | orchestrator | ok: [testbed-node-0]
2026-01-10 15:05:16.468781 | orchestrator |
2026-01-10 15:05:16.468785 | orchestrator | TASK [Get monmap info from one mon container] **********************************
2026-01-10 15:05:16.468789 | orchestrator | Saturday 10 January 2026 15:05:07 +0000 (0:00:00.140) 0:00:07.159 ******
2026-01-10 15:05:16.468794 | orchestrator | changed: [testbed-node-0]
2026-01-10 15:05:16.468798 | orchestrator |
2026-01-10 15:05:16.468803 | orchestrator | TASK [Set quorum test data] ****************************************************
2026-01-10 15:05:16.468807 | orchestrator | Saturday 10 January 2026 15:05:09 +0000 (0:00:01.705) 0:00:08.865 ******
2026-01-10 15:05:16.468811 | orchestrator | ok: [testbed-node-0]
2026-01-10 15:05:16.468815 | orchestrator |
2026-01-10 15:05:16.468820 | orchestrator | TASK [Fail quorum test if not all monitors are in quorum] **********************
2026-01-10 15:05:16.468824 | orchestrator | Saturday 10 January 2026 15:05:09 +0000 (0:00:00.503) 0:00:09.368 ******
2026-01-10 15:05:16.468829 | orchestrator | skipping: [testbed-node-0]
2026-01-10 15:05:16.468833 | orchestrator |
2026-01-10 15:05:16.468837 | orchestrator | TASK [Pass quorum test if all monitors are in quorum] **************************
2026-01-10 15:05:16.468842 | orchestrator | Saturday 10 January 2026 15:05:09 +0000 (0:00:00.134) 0:00:09.503 ******
2026-01-10 15:05:16.468846 | orchestrator | ok: [testbed-node-0]
2026-01-10 15:05:16.468850 | orchestrator |
2026-01-10 15:05:16.468855 | orchestrator | TASK [Set fsid test vars] ******************************************************
2026-01-10 15:05:16.468859 | orchestrator | Saturday 10 January 2026 15:05:10 +0000 (0:00:00.397) 0:00:09.901 ******
2026-01-10 15:05:16.468863 | orchestrator | ok: [testbed-node-0]
2026-01-10 15:05:16.468868 | orchestrator |
2026-01-10 15:05:16.468872 | orchestrator | TASK [Fail Cluster FSID test if FSID does not match configuration] *************
2026-01-10 15:05:16.468876 | orchestrator | Saturday 10 January 2026 15:05:10 +0000 (0:00:00.355) 0:00:10.256 ******
2026-01-10 15:05:16.468881 | orchestrator | skipping: [testbed-node-0]
2026-01-10 15:05:16.468885 | orchestrator |
2026-01-10 15:05:16.468890 | orchestrator | TASK [Pass Cluster FSID test if it matches configuration] **********************
2026-01-10 15:05:16.468894 | orchestrator | Saturday 10 January 2026 15:05:10 +0000 (0:00:00.129) 0:00:10.386 ******
2026-01-10 15:05:16.468898 | orchestrator | ok: [testbed-node-0]
2026-01-10 15:05:16.468903 | orchestrator |
2026-01-10 15:05:16.468907 | orchestrator | TASK [Prepare status test vars] ************************************************
2026-01-10 15:05:16.468912 | orchestrator | Saturday 10 January 2026 15:05:10 +0000 (0:00:00.122) 0:00:10.508 ******
2026-01-10 15:05:16.468916 | orchestrator | ok: [testbed-node-0]
2026-01-10 15:05:16.468925 | orchestrator |
2026-01-10 15:05:16.468930 | orchestrator | TASK [Gather status data] ******************************************************
2026-01-10 15:05:16.468934 | orchestrator | Saturday 10 January 2026 15:05:10 +0000 (0:00:00.128) 0:00:10.636 ******
2026-01-10 15:05:16.468938 | orchestrator | changed: [testbed-node-0]
2026-01-10 15:05:16.468942 | orchestrator |
2026-01-10 15:05:16.468947 | orchestrator | TASK [Set health test data] ****************************************************
2026-01-10 15:05:16.468952 | orchestrator | Saturday 10 January 2026 15:05:12 +0000 (0:00:01.453) 0:00:12.090 ******
2026-01-10 15:05:16.468956 | orchestrator | ok: [testbed-node-0]
2026-01-10 15:05:16.468960 | orchestrator |
2026-01-10 15:05:16.468965 | orchestrator | TASK [Fail cluster-health if health is not acceptable] *************************
2026-01-10 15:05:16.468978 | orchestrator | Saturday 10 January 2026 15:05:12 +0000 (0:00:00.310) 0:00:12.400 ******
2026-01-10 15:05:16.468983 | orchestrator | skipping: [testbed-node-0]
2026-01-10 15:05:16.468993 | orchestrator |
2026-01-10 15:05:16.468998 | orchestrator | TASK [Pass cluster-health if health is acceptable] *****************************
2026-01-10 15:05:16.469002 | orchestrator | Saturday 10 January 2026 15:05:12 +0000 (0:00:00.141) 0:00:12.542 ******
2026-01-10 15:05:16.469007 | orchestrator | ok: [testbed-node-0]
2026-01-10 15:05:16.469011 | orchestrator |
2026-01-10 15:05:16.469020 | orchestrator | TASK [Fail cluster-health if health is not acceptable (strict)] ****************
2026-01-10 15:05:16.469024 | orchestrator | Saturday 10 January 2026 15:05:12 +0000 (0:00:00.146) 0:00:12.688 ******
2026-01-10 15:05:16.469029 | orchestrator | skipping: [testbed-node-0]
2026-01-10 15:05:16.469033 | orchestrator |
2026-01-10 15:05:16.469037 | orchestrator | TASK [Pass cluster-health if status is OK (strict)] ****************************
2026-01-10 15:05:16.469042 | orchestrator | Saturday 10 January 2026 15:05:13 +0000 (0:00:00.171) 0:00:12.860 ******
2026-01-10 15:05:16.469046 | orchestrator | skipping: [testbed-node-0]
2026-01-10 15:05:16.469050 | orchestrator |
2026-01-10 15:05:16.469055 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2026-01-10 15:05:16.469059 | orchestrator | Saturday 10 January 2026 15:05:13 +0000 (0:00:00.338) 0:00:13.199 ******
2026-01-10 15:05:16.469064 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-01-10 15:05:16.469068 | orchestrator |
2026-01-10 15:05:16.469072 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2026-01-10 15:05:16.469077 | orchestrator | Saturday 10 January 2026 15:05:13 +0000 (0:00:00.264) 0:00:13.463 ******
2026-01-10 15:05:16.469081 | orchestrator | skipping: [testbed-node-0]
2026-01-10 15:05:16.469086 | orchestrator |
2026-01-10 15:05:16.469090 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-01-10 15:05:16.469097 | orchestrator | Saturday 10 January 2026 15:05:13 +0000 (0:00:00.277) 0:00:13.741 ******
2026-01-10 15:05:16.469103 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-01-10 15:05:16.469110 | orchestrator |
2026-01-10 15:05:16.469116 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-01-10 15:05:16.469127 | orchestrator | Saturday 10 January 2026 15:05:15 +0000 (0:00:01.756) 0:00:15.497 ******
2026-01-10 15:05:16.469133 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-01-10 15:05:16.469139 | orchestrator |
2026-01-10 15:05:16.469146 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-01-10 15:05:16.469152 | orchestrator | Saturday 10 January 2026 15:05:15 +0000 (0:00:00.277) 0:00:15.774 ******
2026-01-10 15:05:16.469157 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-01-10 15:05:16.469163 | orchestrator |
2026-01-10 15:05:16.469175 | orchestrator | TASK [Flush handlers] **********************************************************
2026-01-10 15:05:19.428531 | orchestrator | Saturday 10 January 2026 15:05:16 +0000 (0:00:00.282) 0:00:16.056 ******
2026-01-10 15:05:19.428656 | orchestrator |
2026-01-10 15:05:19.428667 | orchestrator | TASK [Flush handlers] **********************************************************
2026-01-10 15:05:19.428675 | orchestrator | Saturday 10 January 2026 15:05:16 +0000 (0:00:00.076) 0:00:16.133 ******
2026-01-10 15:05:19.428806 | orchestrator |
2026-01-10 15:05:19.428818 | orchestrator | TASK [Flush handlers] **********************************************************
2026-01-10 15:05:19.428825 | orchestrator | Saturday 10 January 2026 15:05:16 +0000 (0:00:00.075) 0:00:16.208 ******
2026-01-10 15:05:19.428831 | orchestrator |
2026-01-10 15:05:19.428838 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2026-01-10 15:05:19.428844 | orchestrator | Saturday 10 January 2026 15:05:16 +0000 (0:00:00.092) 0:00:16.301 ******
2026-01-10 15:05:19.428852 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-01-10 15:05:19.428859 | orchestrator |
2026-01-10 15:05:19.428865 | orchestrator | TASK [Print report file information] *******************************************
2026-01-10 15:05:19.428871 | orchestrator | Saturday 10 January 2026 15:05:18 +0000 (0:00:01.546) 0:00:17.848 ******
2026-01-10 15:05:19.428877 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => {
2026-01-10 15:05:19.428884 | orchestrator |  "msg": [
2026-01-10 15:05:19.428893 | orchestrator |  "Validator run completed.", 2026-01-10 15:05:19.428901 | orchestrator |  "You can find the report file here:", 2026-01-10 15:05:19.428908 | orchestrator |  "/opt/reports/validator/ceph-mons-validator-2026-01-10T15:05:01+00:00-report.json", 2026-01-10 15:05:19.428916 | orchestrator |  "on the following host:", 2026-01-10 15:05:19.428923 | orchestrator |  "testbed-manager" 2026-01-10 15:05:19.428930 | orchestrator |  ] 2026-01-10 15:05:19.428937 | orchestrator | } 2026-01-10 15:05:19.428944 | orchestrator | 2026-01-10 15:05:19.428951 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-10 15:05:19.428959 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2026-01-10 15:05:19.428969 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-10 15:05:19.428976 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-10 15:05:19.428983 | orchestrator | 2026-01-10 15:05:19.428989 | orchestrator | 2026-01-10 15:05:19.428996 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-10 15:05:19.429002 | orchestrator | Saturday 10 January 2026 15:05:18 +0000 (0:00:00.912) 0:00:18.760 ****** 2026-01-10 15:05:19.429008 | orchestrator | =============================================================================== 2026-01-10 15:05:19.429015 | orchestrator | Aggregate test results step one ----------------------------------------- 1.76s 2026-01-10 15:05:19.429021 | orchestrator | Get monmap info from one mon container ---------------------------------- 1.71s 2026-01-10 15:05:19.429028 | orchestrator | Write report file ------------------------------------------------------- 1.55s 2026-01-10 15:05:19.429034 | orchestrator | Gather status data 
------------------------------------------------------ 1.45s 2026-01-10 15:05:19.429040 | orchestrator | Create report output directory ------------------------------------------ 1.00s 2026-01-10 15:05:19.429047 | orchestrator | Get container info ------------------------------------------------------ 0.95s 2026-01-10 15:05:19.429054 | orchestrator | Print report file information ------------------------------------------- 0.91s 2026-01-10 15:05:19.429060 | orchestrator | Get timestamp for report file ------------------------------------------- 0.86s 2026-01-10 15:05:19.429067 | orchestrator | Set quorum test data ---------------------------------------------------- 0.50s 2026-01-10 15:05:19.429074 | orchestrator | Set test result to passed if ceph-mon is running ------------------------ 0.45s 2026-01-10 15:05:19.429080 | orchestrator | Set test result to passed if container is existing ---------------------- 0.42s 2026-01-10 15:05:19.429087 | orchestrator | Pass quorum test if all monitors are in quorum -------------------------- 0.40s 2026-01-10 15:05:19.429094 | orchestrator | Set fsid test vars ------------------------------------------------------ 0.36s 2026-01-10 15:05:19.429100 | orchestrator | Pass cluster-health if status is OK (strict) ---------------------------- 0.34s 2026-01-10 15:05:19.429117 | orchestrator | Prepare test data for container existance test -------------------------- 0.32s 2026-01-10 15:05:19.429124 | orchestrator | Set health test data ---------------------------------------------------- 0.31s 2026-01-10 15:05:19.429130 | orchestrator | Fail due to missing containers ------------------------------------------ 0.31s 2026-01-10 15:05:19.429137 | orchestrator | Prepare test data ------------------------------------------------------- 0.29s 2026-01-10 15:05:19.429143 | orchestrator | Aggregate test results step three --------------------------------------- 0.28s 2026-01-10 15:05:19.429150 | orchestrator | Aggregate test results step two 
----------------------------------------- 0.28s 2026-01-10 15:05:19.846997 | orchestrator | + osism validate ceph-mgrs 2026-01-10 15:05:51.661402 | orchestrator | 2026-01-10 15:05:51.661510 | orchestrator | PLAY [Ceph validate mgrs] ****************************************************** 2026-01-10 15:05:51.661523 | orchestrator | 2026-01-10 15:05:51.661530 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2026-01-10 15:05:51.661537 | orchestrator | Saturday 10 January 2026 15:05:36 +0000 (0:00:00.459) 0:00:00.459 ****** 2026-01-10 15:05:51.661544 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-01-10 15:05:51.661551 | orchestrator | 2026-01-10 15:05:51.661557 | orchestrator | TASK [Create report output directory] ****************************************** 2026-01-10 15:05:51.661563 | orchestrator | Saturday 10 January 2026 15:05:37 +0000 (0:00:00.847) 0:00:01.306 ****** 2026-01-10 15:05:51.661569 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-01-10 15:05:51.661575 | orchestrator | 2026-01-10 15:05:51.661580 | orchestrator | TASK [Define report vars] ****************************************************** 2026-01-10 15:05:51.661587 | orchestrator | Saturday 10 January 2026 15:05:38 +0000 (0:00:01.020) 0:00:02.327 ****** 2026-01-10 15:05:51.661593 | orchestrator | ok: [testbed-node-0] 2026-01-10 15:05:51.661600 | orchestrator | 2026-01-10 15:05:51.661606 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2026-01-10 15:05:51.661612 | orchestrator | Saturday 10 January 2026 15:05:38 +0000 (0:00:00.159) 0:00:02.487 ****** 2026-01-10 15:05:51.661619 | orchestrator | ok: [testbed-node-0] 2026-01-10 15:05:51.661625 | orchestrator | ok: [testbed-node-1] 2026-01-10 15:05:51.661632 | orchestrator | ok: [testbed-node-2] 2026-01-10 15:05:51.661638 | orchestrator | 2026-01-10 15:05:51.661645 | orchestrator | TASK [Get container 
info] ****************************************************** 2026-01-10 15:05:51.661652 | orchestrator | Saturday 10 January 2026 15:05:38 +0000 (0:00:00.288) 0:00:02.776 ****** 2026-01-10 15:05:51.661658 | orchestrator | ok: [testbed-node-2] 2026-01-10 15:05:51.661664 | orchestrator | ok: [testbed-node-1] 2026-01-10 15:05:51.661670 | orchestrator | ok: [testbed-node-0] 2026-01-10 15:05:51.661676 | orchestrator | 2026-01-10 15:05:51.661683 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2026-01-10 15:05:51.661689 | orchestrator | Saturday 10 January 2026 15:05:40 +0000 (0:00:01.115) 0:00:03.892 ****** 2026-01-10 15:05:51.661696 | orchestrator | skipping: [testbed-node-0] 2026-01-10 15:05:51.661703 | orchestrator | skipping: [testbed-node-1] 2026-01-10 15:05:51.661709 | orchestrator | skipping: [testbed-node-2] 2026-01-10 15:05:51.661715 | orchestrator | 2026-01-10 15:05:51.661722 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2026-01-10 15:05:51.661728 | orchestrator | Saturday 10 January 2026 15:05:40 +0000 (0:00:00.293) 0:00:04.185 ****** 2026-01-10 15:05:51.661734 | orchestrator | ok: [testbed-node-0] 2026-01-10 15:05:51.661741 | orchestrator | ok: [testbed-node-1] 2026-01-10 15:05:51.661747 | orchestrator | ok: [testbed-node-2] 2026-01-10 15:05:51.661841 | orchestrator | 2026-01-10 15:05:51.661853 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-01-10 15:05:51.661861 | orchestrator | Saturday 10 January 2026 15:05:40 +0000 (0:00:00.526) 0:00:04.711 ****** 2026-01-10 15:05:51.661867 | orchestrator | ok: [testbed-node-0] 2026-01-10 15:05:51.661873 | orchestrator | ok: [testbed-node-1] 2026-01-10 15:05:51.661880 | orchestrator | ok: [testbed-node-2] 2026-01-10 15:05:51.661910 | orchestrator | 2026-01-10 15:05:51.661918 | orchestrator | TASK [Set test result to failed if ceph-mgr is not running] 
******************** 2026-01-10 15:05:51.661923 | orchestrator | Saturday 10 January 2026 15:05:41 +0000 (0:00:00.323) 0:00:05.035 ****** 2026-01-10 15:05:51.661927 | orchestrator | skipping: [testbed-node-0] 2026-01-10 15:05:51.661933 | orchestrator | skipping: [testbed-node-1] 2026-01-10 15:05:51.661939 | orchestrator | skipping: [testbed-node-2] 2026-01-10 15:05:51.661945 | orchestrator | 2026-01-10 15:05:51.661950 | orchestrator | TASK [Set test result to passed if ceph-mgr is running] ************************ 2026-01-10 15:05:51.661956 | orchestrator | Saturday 10 January 2026 15:05:41 +0000 (0:00:00.315) 0:00:05.350 ****** 2026-01-10 15:05:51.661961 | orchestrator | ok: [testbed-node-0] 2026-01-10 15:05:51.661967 | orchestrator | ok: [testbed-node-1] 2026-01-10 15:05:51.661990 | orchestrator | ok: [testbed-node-2] 2026-01-10 15:05:51.661996 | orchestrator | 2026-01-10 15:05:51.662002 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-01-10 15:05:51.662009 | orchestrator | Saturday 10 January 2026 15:05:42 +0000 (0:00:00.470) 0:00:05.821 ****** 2026-01-10 15:05:51.662067 | orchestrator | skipping: [testbed-node-0] 2026-01-10 15:05:51.662074 | orchestrator | 2026-01-10 15:05:51.662081 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-01-10 15:05:51.662087 | orchestrator | Saturday 10 January 2026 15:05:42 +0000 (0:00:00.267) 0:00:06.089 ****** 2026-01-10 15:05:51.662093 | orchestrator | skipping: [testbed-node-0] 2026-01-10 15:05:51.662100 | orchestrator | 2026-01-10 15:05:51.662109 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-01-10 15:05:51.662113 | orchestrator | Saturday 10 January 2026 15:05:42 +0000 (0:00:00.260) 0:00:06.349 ****** 2026-01-10 15:05:51.662116 | orchestrator | skipping: [testbed-node-0] 2026-01-10 15:05:51.662120 | orchestrator | 2026-01-10 15:05:51.662124 | orchestrator | TASK 
[Flush handlers] ********************************************************** 2026-01-10 15:05:51.662128 | orchestrator | Saturday 10 January 2026 15:05:42 +0000 (0:00:00.289) 0:00:06.639 ****** 2026-01-10 15:05:51.662132 | orchestrator | 2026-01-10 15:05:51.662135 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-01-10 15:05:51.662139 | orchestrator | Saturday 10 January 2026 15:05:42 +0000 (0:00:00.070) 0:00:06.710 ****** 2026-01-10 15:05:51.662143 | orchestrator | 2026-01-10 15:05:51.662147 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-01-10 15:05:51.662151 | orchestrator | Saturday 10 January 2026 15:05:42 +0000 (0:00:00.071) 0:00:06.781 ****** 2026-01-10 15:05:51.662154 | orchestrator | 2026-01-10 15:05:51.662158 | orchestrator | TASK [Print report file information] ******************************************* 2026-01-10 15:05:51.662162 | orchestrator | Saturday 10 January 2026 15:05:43 +0000 (0:00:00.084) 0:00:06.866 ****** 2026-01-10 15:05:51.662166 | orchestrator | skipping: [testbed-node-0] 2026-01-10 15:05:51.662170 | orchestrator | 2026-01-10 15:05:51.662174 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2026-01-10 15:05:51.662177 | orchestrator | Saturday 10 January 2026 15:05:43 +0000 (0:00:00.257) 0:00:07.123 ****** 2026-01-10 15:05:51.662181 | orchestrator | skipping: [testbed-node-0] 2026-01-10 15:05:51.662187 | orchestrator | 2026-01-10 15:05:51.662217 | orchestrator | TASK [Define mgr module test vars] ********************************************* 2026-01-10 15:05:51.662226 | orchestrator | Saturday 10 January 2026 15:05:43 +0000 (0:00:00.252) 0:00:07.375 ****** 2026-01-10 15:05:51.662232 | orchestrator | ok: [testbed-node-0] 2026-01-10 15:05:51.662238 | orchestrator | 2026-01-10 15:05:51.662244 | orchestrator | TASK [Gather list of mgr modules] ********************************************** 
2026-01-10 15:05:51.662250 | orchestrator | Saturday 10 January 2026 15:05:43 +0000 (0:00:00.145) 0:00:07.520 ****** 2026-01-10 15:05:51.662255 | orchestrator | changed: [testbed-node-0] 2026-01-10 15:05:51.662261 | orchestrator | 2026-01-10 15:05:51.662267 | orchestrator | TASK [Parse mgr module list from json] ***************************************** 2026-01-10 15:05:51.662273 | orchestrator | Saturday 10 January 2026 15:05:45 +0000 (0:00:02.151) 0:00:09.671 ****** 2026-01-10 15:05:51.662288 | orchestrator | ok: [testbed-node-0] 2026-01-10 15:05:51.662294 | orchestrator | 2026-01-10 15:05:51.662301 | orchestrator | TASK [Extract list of enabled mgr modules] ************************************* 2026-01-10 15:05:51.662306 | orchestrator | Saturday 10 January 2026 15:05:46 +0000 (0:00:00.524) 0:00:10.196 ****** 2026-01-10 15:05:51.662312 | orchestrator | ok: [testbed-node-0] 2026-01-10 15:05:51.662318 | orchestrator | 2026-01-10 15:05:51.662324 | orchestrator | TASK [Fail test if mgr modules are disabled that should be enabled] ************ 2026-01-10 15:05:51.662327 | orchestrator | Saturday 10 January 2026 15:05:46 +0000 (0:00:00.326) 0:00:10.523 ****** 2026-01-10 15:05:51.662331 | orchestrator | skipping: [testbed-node-0] 2026-01-10 15:05:51.662337 | orchestrator | 2026-01-10 15:05:51.662343 | orchestrator | TASK [Pass test if required mgr modules are enabled] *************************** 2026-01-10 15:05:51.662349 | orchestrator | Saturday 10 January 2026 15:05:46 +0000 (0:00:00.155) 0:00:10.678 ****** 2026-01-10 15:05:51.662355 | orchestrator | ok: [testbed-node-0] 2026-01-10 15:05:51.662360 | orchestrator | 2026-01-10 15:05:51.662367 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2026-01-10 15:05:51.662372 | orchestrator | Saturday 10 January 2026 15:05:47 +0000 (0:00:00.156) 0:00:10.834 ****** 2026-01-10 15:05:51.662379 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-01-10 
15:05:51.662461 | orchestrator | 2026-01-10 15:05:51.662468 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2026-01-10 15:05:51.662476 | orchestrator | Saturday 10 January 2026 15:05:47 +0000 (0:00:00.270) 0:00:11.105 ****** 2026-01-10 15:05:51.662482 | orchestrator | skipping: [testbed-node-0] 2026-01-10 15:05:51.662486 | orchestrator | 2026-01-10 15:05:51.662490 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-01-10 15:05:51.662494 | orchestrator | Saturday 10 January 2026 15:05:47 +0000 (0:00:00.240) 0:00:11.345 ****** 2026-01-10 15:05:51.662499 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-01-10 15:05:51.662503 | orchestrator | 2026-01-10 15:05:51.662508 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-01-10 15:05:51.662512 | orchestrator | Saturday 10 January 2026 15:05:48 +0000 (0:00:01.289) 0:00:12.635 ****** 2026-01-10 15:05:51.662517 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-01-10 15:05:51.662521 | orchestrator | 2026-01-10 15:05:51.662525 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-01-10 15:05:51.662530 | orchestrator | Saturday 10 January 2026 15:05:49 +0000 (0:00:00.274) 0:00:12.909 ****** 2026-01-10 15:05:51.662534 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-01-10 15:05:51.662538 | orchestrator | 2026-01-10 15:05:51.662542 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-01-10 15:05:51.662547 | orchestrator | Saturday 10 January 2026 15:05:49 +0000 (0:00:00.239) 0:00:13.149 ****** 2026-01-10 15:05:51.662551 | orchestrator | 2026-01-10 15:05:51.662555 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-01-10 15:05:51.662560 | orchestrator 
| Saturday 10 January 2026 15:05:49 +0000 (0:00:00.084) 0:00:13.234 ****** 2026-01-10 15:05:51.662564 | orchestrator | 2026-01-10 15:05:51.662568 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-01-10 15:05:51.662572 | orchestrator | Saturday 10 January 2026 15:05:49 +0000 (0:00:00.069) 0:00:13.303 ****** 2026-01-10 15:05:51.662576 | orchestrator | 2026-01-10 15:05:51.662581 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2026-01-10 15:05:51.662585 | orchestrator | Saturday 10 January 2026 15:05:49 +0000 (0:00:00.266) 0:00:13.569 ****** 2026-01-10 15:05:51.662590 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-01-10 15:05:51.662594 | orchestrator | 2026-01-10 15:05:51.662604 | orchestrator | TASK [Print report file information] ******************************************* 2026-01-10 15:05:51.662609 | orchestrator | Saturday 10 January 2026 15:05:51 +0000 (0:00:01.464) 0:00:15.034 ****** 2026-01-10 15:05:51.662613 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2026-01-10 15:05:51.662624 | orchestrator |  "msg": [ 2026-01-10 15:05:51.662629 | orchestrator |  "Validator run completed.", 2026-01-10 15:05:51.662634 | orchestrator |  "You can find the report file here:", 2026-01-10 15:05:51.662638 | orchestrator |  "/opt/reports/validator/ceph-mgrs-validator-2026-01-10T15:05:37+00:00-report.json", 2026-01-10 15:05:51.662644 | orchestrator |  "on the following host:", 2026-01-10 15:05:51.662649 | orchestrator |  "testbed-manager" 2026-01-10 15:05:51.662653 | orchestrator |  ] 2026-01-10 15:05:51.662658 | orchestrator | } 2026-01-10 15:05:51.662663 | orchestrator | 2026-01-10 15:05:51.662668 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-10 15:05:51.662674 | orchestrator | testbed-node-0 : ok=19  changed=3  unreachable=0 failed=0 skipped=9  rescued=0 
ignored=0 2026-01-10 15:05:51.662680 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-10 15:05:51.662693 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-10 15:05:51.894638 | orchestrator | 2026-01-10 15:05:51.894724 | orchestrator | 2026-01-10 15:05:51.894731 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-10 15:05:51.894738 | orchestrator | Saturday 10 January 2026 15:05:51 +0000 (0:00:00.421) 0:00:15.455 ****** 2026-01-10 15:05:51.894743 | orchestrator | =============================================================================== 2026-01-10 15:05:51.894748 | orchestrator | Gather list of mgr modules ---------------------------------------------- 2.15s 2026-01-10 15:05:51.894752 | orchestrator | Write report file ------------------------------------------------------- 1.46s 2026-01-10 15:05:51.894778 | orchestrator | Aggregate test results step one ----------------------------------------- 1.29s 2026-01-10 15:05:51.894784 | orchestrator | Get container info ------------------------------------------------------ 1.12s 2026-01-10 15:05:51.894788 | orchestrator | Create report output directory ------------------------------------------ 1.02s 2026-01-10 15:05:51.894793 | orchestrator | Get timestamp for report file ------------------------------------------- 0.85s 2026-01-10 15:05:51.894798 | orchestrator | Set test result to passed if container is existing ---------------------- 0.53s 2026-01-10 15:05:51.894802 | orchestrator | Parse mgr module list from json ----------------------------------------- 0.52s 2026-01-10 15:05:51.894807 | orchestrator | Set test result to passed if ceph-mgr is running ------------------------ 0.47s 2026-01-10 15:05:51.894811 | orchestrator | Print report file information ------------------------------------------- 0.42s 2026-01-10 15:05:51.894815 | 
orchestrator | Flush handlers ---------------------------------------------------------- 0.42s 2026-01-10 15:05:51.894820 | orchestrator | Extract list of enabled mgr modules ------------------------------------- 0.33s 2026-01-10 15:05:51.894824 | orchestrator | Prepare test data ------------------------------------------------------- 0.32s 2026-01-10 15:05:51.894828 | orchestrator | Set test result to failed if ceph-mgr is not running -------------------- 0.32s 2026-01-10 15:05:51.894833 | orchestrator | Set test result to failed if container is missing ----------------------- 0.29s 2026-01-10 15:05:51.894837 | orchestrator | Aggregate test results step three --------------------------------------- 0.29s 2026-01-10 15:05:51.894842 | orchestrator | Prepare test data for container existance test -------------------------- 0.29s 2026-01-10 15:05:51.894846 | orchestrator | Aggregate test results step two ----------------------------------------- 0.27s 2026-01-10 15:05:51.894850 | orchestrator | Set validation result to passed if no test failed ----------------------- 0.27s 2026-01-10 15:05:51.894855 | orchestrator | Aggregate test results step one ----------------------------------------- 0.27s 2026-01-10 15:05:52.133168 | orchestrator | + osism validate ceph-osds 2026-01-10 15:06:13.467775 | orchestrator | 2026-01-10 15:06:13.467898 | orchestrator | PLAY [Ceph validate OSDs] ****************************************************** 2026-01-10 15:06:13.467906 | orchestrator | 2026-01-10 15:06:13.467927 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2026-01-10 15:06:13.467931 | orchestrator | Saturday 10 January 2026 15:06:08 +0000 (0:00:00.423) 0:00:00.423 ****** 2026-01-10 15:06:13.467936 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-01-10 15:06:13.467941 | orchestrator | 2026-01-10 15:06:13.467945 | orchestrator | TASK [Get extra vars for Ceph configuration] 
*********************************** 2026-01-10 15:06:13.467948 | orchestrator | Saturday 10 January 2026 15:06:09 +0000 (0:00:00.841) 0:00:01.265 ****** 2026-01-10 15:06:13.467953 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-01-10 15:06:13.467957 | orchestrator | 2026-01-10 15:06:13.467960 | orchestrator | TASK [Create report output directory] ****************************************** 2026-01-10 15:06:13.467964 | orchestrator | Saturday 10 January 2026 15:06:10 +0000 (0:00:00.555) 0:00:01.821 ****** 2026-01-10 15:06:13.467968 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-01-10 15:06:13.467972 | orchestrator | 2026-01-10 15:06:13.467975 | orchestrator | TASK [Define report vars] ****************************************************** 2026-01-10 15:06:13.467979 | orchestrator | Saturday 10 January 2026 15:06:10 +0000 (0:00:00.769) 0:00:02.591 ****** 2026-01-10 15:06:13.467983 | orchestrator | ok: [testbed-node-3] 2026-01-10 15:06:13.467988 | orchestrator | 2026-01-10 15:06:13.467991 | orchestrator | TASK [Define OSD test variables] *********************************************** 2026-01-10 15:06:13.467996 | orchestrator | Saturday 10 January 2026 15:06:11 +0000 (0:00:00.134) 0:00:02.725 ****** 2026-01-10 15:06:13.468000 | orchestrator | skipping: [testbed-node-3] 2026-01-10 15:06:13.468003 | orchestrator | 2026-01-10 15:06:13.468007 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2026-01-10 15:06:13.468011 | orchestrator | Saturday 10 January 2026 15:06:11 +0000 (0:00:00.131) 0:00:02.856 ****** 2026-01-10 15:06:13.468015 | orchestrator | skipping: [testbed-node-3] 2026-01-10 15:06:13.468019 | orchestrator | skipping: [testbed-node-4] 2026-01-10 15:06:13.468022 | orchestrator | skipping: [testbed-node-5] 2026-01-10 15:06:13.468026 | orchestrator | 2026-01-10 15:06:13.468030 | orchestrator | TASK [Define OSD test variables] 
*********************************************** 2026-01-10 15:06:13.468034 | orchestrator | Saturday 10 January 2026 15:06:11 +0000 (0:00:00.340) 0:00:03.196 ****** 2026-01-10 15:06:13.468037 | orchestrator | ok: [testbed-node-3] 2026-01-10 15:06:13.468041 | orchestrator | 2026-01-10 15:06:13.468045 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2026-01-10 15:06:13.468048 | orchestrator | Saturday 10 January 2026 15:06:11 +0000 (0:00:00.150) 0:00:03.347 ****** 2026-01-10 15:06:13.468052 | orchestrator | ok: [testbed-node-3] 2026-01-10 15:06:13.468056 | orchestrator | ok: [testbed-node-4] 2026-01-10 15:06:13.468060 | orchestrator | ok: [testbed-node-5] 2026-01-10 15:06:13.468063 | orchestrator | 2026-01-10 15:06:13.468067 | orchestrator | TASK [Calculate total number of OSDs in cluster] ******************************* 2026-01-10 15:06:13.468071 | orchestrator | Saturday 10 January 2026 15:06:12 +0000 (0:00:00.345) 0:00:03.692 ****** 2026-01-10 15:06:13.468075 | orchestrator | ok: [testbed-node-3] 2026-01-10 15:06:13.468078 | orchestrator | 2026-01-10 15:06:13.468082 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-01-10 15:06:13.468086 | orchestrator | Saturday 10 January 2026 15:06:12 +0000 (0:00:00.619) 0:00:04.312 ****** 2026-01-10 15:06:13.468090 | orchestrator | ok: [testbed-node-3] 2026-01-10 15:06:13.468093 | orchestrator | ok: [testbed-node-4] 2026-01-10 15:06:13.468097 | orchestrator | ok: [testbed-node-5] 2026-01-10 15:06:13.468101 | orchestrator | 2026-01-10 15:06:13.468105 | orchestrator | TASK [Get list of ceph-osd containers on host] ********************************* 2026-01-10 15:06:13.468108 | orchestrator | Saturday 10 January 2026 15:06:13 +0000 (0:00:00.530) 0:00:04.842 ****** 2026-01-10 15:06:13.468114 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'cc60e5b37475a3db68173c57df3bc8b79f1fb169ada94f64e192b437695e6c09', 'image': 
'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 7 minutes (healthy)'})  2026-01-10 15:06:13.468125 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'f5127dc8aaaaea96aa2acda269f029ebeba4563212dcef545f17a27b3e5c3ceb', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 7 minutes (healthy)'})  2026-01-10 15:06:13.468129 | orchestrator | skipping: [testbed-node-3] => (item={'id': '4b2e70018947aadc7501202443aeedeec582715b34b734eeefd45b7db9fb4ddd', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 7 minutes (healthy)'})  2026-01-10 15:06:13.468135 | orchestrator | skipping: [testbed-node-3] => (item={'id': '13c09193c584dd4e390b2813774ed4147a32690a1a590eec1934397694ca9919', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 12 minutes'})  2026-01-10 15:06:13.468141 | orchestrator | skipping: [testbed-node-3] => (item={'id': '5e6128f6c15e087709c8d4dad58aec699a20deada2a404bd41da6a17e573a690', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 12 minutes'})  2026-01-10 15:06:13.468155 | orchestrator | skipping: [testbed-node-3] => (item={'id': '305bbcc565b81841a464e99e84035358d5da2532ba7190b36c8608b61d9f67d4', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 13 minutes'})  2026-01-10 15:06:13.468160 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'cc9f2fc9baf2ab93c5a46c898d09fefbb0001484d38ee999155ee86c1a225b9b', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'name': 
'/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 14 minutes (healthy)'})  2026-01-10 15:06:13.468164 | orchestrator | skipping: [testbed-node-3] => (item={'id': '091c43487a6c8dd7fb1048a4138e5dbf01bdc560929193bee995293eb3e252a0', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-3-rgw0', 'state': 'running', 'status': 'Up 22 minutes'})  2026-01-10 15:06:13.468181 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'c2ea99e5f90dca43086161c50585001737f9c9bc4b665a47f7089ef3883299fb', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-3', 'state': 'running', 'status': 'Up 23 minutes'})  2026-01-10 15:06:13.468189 | orchestrator | skipping: [testbed-node-3] => (item={'id': '414198ed09f14a5367c0ba0e82725bff497fcdd762a65f3917439e10c04f89cc', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-3', 'state': 'running', 'status': 'Up 24 minutes'})  2026-01-10 15:06:13.468194 | orchestrator | ok: [testbed-node-3] => (item={'id': 'f2e7e269960dc10d8864832755b96396b41ab1bef40e4e6731637b796067caf3', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-4', 'state': 'running', 'status': 'Up 25 minutes'}) 2026-01-10 15:06:13.468199 | orchestrator | ok: [testbed-node-3] => (item={'id': '41d64b487422569236e9ce3c318020c1c16fb2e13dd0a3ee7ac039df9d2ecfc0', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-0', 'state': 'running', 'status': 'Up 25 minutes'}) 2026-01-10 15:06:13.468203 | orchestrator | skipping: [testbed-node-3] => (item={'id': '6cd05df3e0aa5918ef1bb52b06fee1e094888ba3f9b79fc3a7b73d7bcfc7c4f5', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 28 minutes'})  2026-01-10 15:06:13.468207 | orchestrator | skipping: [testbed-node-3] => (item={'id': 
'72883728ff7ace88eca1919eaafb94c94dbea3e4847f19d818c0c0739db183a2', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 30 minutes (healthy)'})  2026-01-10 15:06:13.468211 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'c995296bc963996f9f4cecb4fbbc48901e76ec71921391394c0aba4b02bc404d', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 30 minutes (healthy)'})  2026-01-10 15:06:13.468219 | orchestrator | skipping: [testbed-node-3] => (item={'id': '1cde755c562a1a2b3e78981838fca953a4ba2b15c41f00b482fb531d9372924d', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'name': '/cron', 'state': 'running', 'status': 'Up 31 minutes'})  2026-01-10 15:06:13.468223 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'd778fa7604e40d2a7fe2a959e785b3f509314e6e3c9b0500a14471ef822f1778', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 31 minutes'})  2026-01-10 15:06:13.468227 | orchestrator | skipping: [testbed-node-3] => (item={'id': '9de60b85812a99f3832798a64dd140a834cb21119fa7b9989359eecf97d9f91c', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'name': '/fluentd', 'state': 'running', 'status': 'Up 32 minutes'})  2026-01-10 15:06:13.468231 | orchestrator | skipping: [testbed-node-4] => (item={'id': '841f0d1e19ac7141596548862e1f2e7783fbb51ee29c46f9f0b8492621959c49', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 7 minutes (healthy)'})  2026-01-10 15:06:13.468235 | orchestrator | skipping: [testbed-node-4] => (item={'id': '95772aba5eeafa2c562258f32c16523cd23e08fb70e913e65b8c90ed7984e072', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 
'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 7 minutes (healthy)'})  2026-01-10 15:06:13.468241 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'c8b6b77e285d60f3563dc0ab38fc0c3fe0f727645beae50b8e2ddf37c7e32404', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 7 minutes (healthy)'})  2026-01-10 15:06:13.736645 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'df2f1c8dabd7ce710e2fc5150a9d8bda27f10cc0bf1f7f6ac8983c414257926d', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 12 minutes'})  2026-01-10 15:06:13.736715 | orchestrator | skipping: [testbed-node-4] => (item={'id': '22b6daa6f4303bcc638d6817d4bdc5f34d613d466ee61f7695f9eb486e0b31d2', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 12 minutes'})  2026-01-10 15:06:13.736722 | orchestrator | skipping: [testbed-node-4] => (item={'id': '5ac174ddc3b2f6e2f8c2c8cf77f5edb31ae635fb6d4e17ac3e0fbcdf0bcf61ad', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 13 minutes'})  2026-01-10 15:06:13.736738 | orchestrator | skipping: [testbed-node-4] => (item={'id': '1390299defeff015379b7f68da874f629276351bfb56a4168831c7028f046054', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 14 minutes (healthy)'})  2026-01-10 15:06:13.736742 | orchestrator | skipping: [testbed-node-4] => (item={'id': '5abdd5c7d01840979a0b3c99f54d7cc4d66628777f5710f6adbf539e0e4eed56', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-4-rgw0', 'state': 'running', 'status': 'Up 
22 minutes'})  2026-01-10 15:06:13.736747 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'bbfe35e1d1cdc884d1e47cc696f00bb975e3ac90602203d5a6761e1db8bce54b', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-4', 'state': 'running', 'status': 'Up 23 minutes'})  2026-01-10 15:06:13.736752 | orchestrator | skipping: [testbed-node-4] => (item={'id': '07f22a1d8b56aa81956b76bf7f29703fc12c818c7c969161791d83ae3ec17b60', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-4', 'state': 'running', 'status': 'Up 24 minutes'})  2026-01-10 15:06:13.736776 | orchestrator | ok: [testbed-node-4] => (item={'id': '8b94f84db5e23766d5a9ef648871d0d5eb14709fbda3827450d51d000319121b', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-3', 'state': 'running', 'status': 'Up 25 minutes'}) 2026-01-10 15:06:13.736841 | orchestrator | ok: [testbed-node-4] => (item={'id': '2fafb6f4bc785d96eaf261df2514497ea989c4db8dd0c5ba97106377e91fc3d7', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-1', 'state': 'running', 'status': 'Up 25 minutes'}) 2026-01-10 15:06:13.736849 | orchestrator | skipping: [testbed-node-4] => (item={'id': '013227f5c2a21e22e9bad05c6a16b2372d3f68893a6fdd4ea240681a83e1aa20', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 28 minutes'})  2026-01-10 15:06:13.736854 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'f3f8b8c62d06e7a6221075a12f05f03dcf7d616762b5cd1c44ce04dfe95f5548', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 30 minutes (healthy)'})  2026-01-10 15:06:13.736861 | orchestrator | skipping: [testbed-node-4] => (item={'id': '8c8a9aa3b4b548f01d0a43669d79c57572b3c016b7ae7b5c7014237cc7f4d602', 'image': 
'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 30 minutes (healthy)'})  2026-01-10 15:06:13.736867 | orchestrator | skipping: [testbed-node-4] => (item={'id': '2a674c046ee5dce736d13fce9308f6ae4bff2f9a7bf4d8ed906d1c4432369a55', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'name': '/cron', 'state': 'running', 'status': 'Up 31 minutes'})  2026-01-10 15:06:13.736873 | orchestrator | skipping: [testbed-node-4] => (item={'id': '67677e515821805d603efdf71c85cdfc1939927b79448c8fe261d198e5ba0de0', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 31 minutes'})  2026-01-10 15:06:13.736895 | orchestrator | skipping: [testbed-node-4] => (item={'id': '95afbeae519bf0ab6a3d30a27ef07d929e94575d451e994aa1a6f42d4267133a', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'name': '/fluentd', 'state': 'running', 'status': 'Up 32 minutes'})  2026-01-10 15:06:13.736902 | orchestrator | skipping: [testbed-node-5] => (item={'id': '20ee8a4e668966b8841c9124af73d60bd0ecc3eeb0c0836baf228f5119753843', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 7 minutes (healthy)'})  2026-01-10 15:06:13.736910 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'c16d96e3a812eed31f4398efdbc427146dca69854b7596bdd29c500d6bd16335', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 7 minutes (healthy)'})  2026-01-10 15:06:13.736916 | orchestrator | skipping: [testbed-node-5] => (item={'id': '97a2a33bbc1db12578efd0c9e096e4519f65755c10e58409ecd28c9be0c5eea4', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 7 minutes (healthy)'})  2026-01-10 
15:06:13.736927 | orchestrator | skipping: [testbed-node-5] => (item={'id': '7769939cf45106b6b879696a5d60705a6d2aec15c9eb638e81c7a99967aa7d37', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 12 minutes'})  2026-01-10 15:06:13.736934 | orchestrator | skipping: [testbed-node-5] => (item={'id': '5fe36b39f35b8028ff324316518a7fafcec0fb0417439b531eb7d76db8f1bb41', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 12 minutes'})  2026-01-10 15:06:13.736940 | orchestrator | skipping: [testbed-node-5] => (item={'id': '4af7ccf24afd9298af37e24657bc8f382ca5a69e981bc954e8175b0bdad58a91', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 13 minutes'})  2026-01-10 15:06:13.736954 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'cbf9631fc43eb76acc891b0814f77f7fd0efa4c757538c41812ce267b4cc9ed2', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 14 minutes (healthy)'})  2026-01-10 15:06:13.736960 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'd9fba4443638624384fd08fb52d406ded98338548ca6d8bb4ad7e95b7bc7f214', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-5-rgw0', 'state': 'running', 'status': 'Up 22 minutes'})  2026-01-10 15:06:13.736965 | orchestrator | skipping: [testbed-node-5] => (item={'id': '2c8492fc459656905ad8d67c0a5de9eadf9db0ec9268548fe0284af0d4cc947a', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-5', 'state': 'running', 'status': 'Up 23 minutes'})  2026-01-10 15:06:13.736972 | orchestrator | skipping: [testbed-node-5] => (item={'id': 
'fdbbf5987b6ef1df9710ed9e03c12aeb192f0bb771713600514663fb45f33d90', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-5', 'state': 'running', 'status': 'Up 24 minutes'})  2026-01-10 15:06:13.736977 | orchestrator | ok: [testbed-node-5] => (item={'id': '5c5ab6ec07e6b31f3033a9a7257e9f970ccd306339ce86627c17728839b0152e', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-5', 'state': 'running', 'status': 'Up 25 minutes'}) 2026-01-10 15:06:13.736983 | orchestrator | ok: [testbed-node-5] => (item={'id': '12267e9bb098384469693c343999d3b1ad622614ffebe9a9e905cb164813d446', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-2', 'state': 'running', 'status': 'Up 25 minutes'}) 2026-01-10 15:06:13.736990 | orchestrator | skipping: [testbed-node-5] => (item={'id': '375a213e6a3901cff9965857abcaec4394ec07fca259685b3444ff78d1d03d3c', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 28 minutes'})  2026-01-10 15:06:13.736997 | orchestrator | skipping: [testbed-node-5] => (item={'id': '542097646ae3e32ef365f98387139922fca1eaa2a93fd7125e545e5eb5da55be', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 30 minutes (healthy)'})  2026-01-10 15:06:13.737009 | orchestrator | skipping: [testbed-node-5] => (item={'id': '112baf2e40cce94a684d6289e87393df6fe42b43ce602cd728ea6386e5fda279', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 30 minutes (healthy)'})  2026-01-10 15:06:27.101281 | orchestrator | skipping: [testbed-node-5] => (item={'id': '32214399b8ec0944cabb615bca5eb72cde3baf182f4a3dd88175e72ba2ae727f', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'name': '/cron', 'state': 'running', 'status': 'Up 31 
minutes'})  2026-01-10 15:06:27.101371 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'c50210e2ebf855ef5d6a111b22aa1da1153fad063ca271d663e7100bf10d13c8', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 31 minutes'})  2026-01-10 15:06:27.101384 | orchestrator | skipping: [testbed-node-5] => (item={'id': '420a9c79a73d852d1af595ebf51b44ef8c6b705f90e16bcf2707daf773f75fe4', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'name': '/fluentd', 'state': 'running', 'status': 'Up 32 minutes'})  2026-01-10 15:06:27.101390 | orchestrator | 2026-01-10 15:06:27.101398 | orchestrator | TASK [Get count of ceph-osd containers on host] ******************************** 2026-01-10 15:06:27.101406 | orchestrator | Saturday 10 January 2026 15:06:13 +0000 (0:00:00.526) 0:00:05.369 ****** 2026-01-10 15:06:27.101432 | orchestrator | ok: [testbed-node-3] 2026-01-10 15:06:27.101439 | orchestrator | ok: [testbed-node-4] 2026-01-10 15:06:27.101445 | orchestrator | ok: [testbed-node-5] 2026-01-10 15:06:27.101451 | orchestrator | 2026-01-10 15:06:27.101458 | orchestrator | TASK [Set test result to failed when count of containers is wrong] ************* 2026-01-10 15:06:27.101464 | orchestrator | Saturday 10 January 2026 15:06:14 +0000 (0:00:00.324) 0:00:05.694 ****** 2026-01-10 15:06:27.101470 | orchestrator | skipping: [testbed-node-3] 2026-01-10 15:06:27.101479 | orchestrator | skipping: [testbed-node-4] 2026-01-10 15:06:27.101483 | orchestrator | skipping: [testbed-node-5] 2026-01-10 15:06:27.101487 | orchestrator | 2026-01-10 15:06:27.101491 | orchestrator | TASK [Set test result to passed if count matches] ****************************** 2026-01-10 15:06:27.101495 | orchestrator | Saturday 10 January 2026 15:06:14 +0000 (0:00:00.523) 0:00:06.217 ****** 2026-01-10 15:06:27.101500 | orchestrator | ok: [testbed-node-3] 2026-01-10 15:06:27.101523 | orchestrator | ok: 
[testbed-node-4] 2026-01-10 15:06:27.101529 | orchestrator | ok: [testbed-node-5] 2026-01-10 15:06:27.101535 | orchestrator | 2026-01-10 15:06:27.101541 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-01-10 15:06:27.101547 | orchestrator | Saturday 10 January 2026 15:06:14 +0000 (0:00:00.325) 0:00:06.542 ****** 2026-01-10 15:06:27.101554 | orchestrator | ok: [testbed-node-3] 2026-01-10 15:06:27.101560 | orchestrator | ok: [testbed-node-4] 2026-01-10 15:06:27.101567 | orchestrator | ok: [testbed-node-5] 2026-01-10 15:06:27.101573 | orchestrator | 2026-01-10 15:06:27.101578 | orchestrator | TASK [Get list of ceph-osd containers that are not running] ******************** 2026-01-10 15:06:27.101586 | orchestrator | Saturday 10 January 2026 15:06:15 +0000 (0:00:00.326) 0:00:06.868 ****** 2026-01-10 15:06:27.101590 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-4', 'osd_id': '4', 'state': 'running'})  2026-01-10 15:06:27.101596 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-0', 'osd_id': '0', 'state': 'running'})  2026-01-10 15:06:27.101600 | orchestrator | skipping: [testbed-node-3] 2026-01-10 15:06:27.101604 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-3', 'osd_id': '3', 'state': 'running'})  2026-01-10 15:06:27.101644 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-1', 'osd_id': '1', 'state': 'running'})  2026-01-10 15:06:27.101649 | orchestrator | skipping: [testbed-node-4] 2026-01-10 15:06:27.101654 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-5', 'osd_id': '5', 'state': 'running'})  2026-01-10 15:06:27.101658 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-2', 'osd_id': '2', 'state': 'running'})  2026-01-10 15:06:27.101662 | orchestrator | skipping: [testbed-node-5] 2026-01-10 15:06:27.101665 | orchestrator | 2026-01-10 15:06:27.101669 | orchestrator | TASK [Get 
count of ceph-osd containers that are not running] ******************* 2026-01-10 15:06:27.101673 | orchestrator | Saturday 10 January 2026 15:06:15 +0000 (0:00:00.359) 0:00:07.228 ****** 2026-01-10 15:06:27.101677 | orchestrator | ok: [testbed-node-3] 2026-01-10 15:06:27.101681 | orchestrator | ok: [testbed-node-4] 2026-01-10 15:06:27.101684 | orchestrator | ok: [testbed-node-5] 2026-01-10 15:06:27.101698 | orchestrator | 2026-01-10 15:06:27.101702 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2026-01-10 15:06:27.101711 | orchestrator | Saturday 10 January 2026 15:06:16 +0000 (0:00:00.499) 0:00:07.727 ****** 2026-01-10 15:06:27.101715 | orchestrator | skipping: [testbed-node-3] 2026-01-10 15:06:27.101719 | orchestrator | skipping: [testbed-node-4] 2026-01-10 15:06:27.101723 | orchestrator | skipping: [testbed-node-5] 2026-01-10 15:06:27.101726 | orchestrator | 2026-01-10 15:06:27.101730 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2026-01-10 15:06:27.101734 | orchestrator | Saturday 10 January 2026 15:06:16 +0000 (0:00:00.312) 0:00:08.040 ****** 2026-01-10 15:06:27.101737 | orchestrator | skipping: [testbed-node-3] 2026-01-10 15:06:27.101741 | orchestrator | skipping: [testbed-node-4] 2026-01-10 15:06:27.101745 | orchestrator | skipping: [testbed-node-5] 2026-01-10 15:06:27.101755 | orchestrator | 2026-01-10 15:06:27.101758 | orchestrator | TASK [Set test result to passed if all containers are running] ***************** 2026-01-10 15:06:27.101762 | orchestrator | Saturday 10 January 2026 15:06:16 +0000 (0:00:00.333) 0:00:08.373 ****** 2026-01-10 15:06:27.101766 | orchestrator | ok: [testbed-node-3] 2026-01-10 15:06:27.101770 | orchestrator | ok: [testbed-node-4] 2026-01-10 15:06:27.101773 | orchestrator | ok: [testbed-node-5] 2026-01-10 15:06:27.101777 | orchestrator | 2026-01-10 15:06:27.101781 | orchestrator | TASK [Aggregate test results step one] 
***************************************** 2026-01-10 15:06:27.101787 | orchestrator | Saturday 10 January 2026 15:06:17 +0000 (0:00:00.303) 0:00:08.677 ****** 2026-01-10 15:06:27.101794 | orchestrator | skipping: [testbed-node-3] 2026-01-10 15:06:27.101813 | orchestrator | 2026-01-10 15:06:27.101835 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-01-10 15:06:27.101841 | orchestrator | Saturday 10 January 2026 15:06:17 +0000 (0:00:00.497) 0:00:09.174 ****** 2026-01-10 15:06:27.101847 | orchestrator | skipping: [testbed-node-3] 2026-01-10 15:06:27.101853 | orchestrator | 2026-01-10 15:06:27.101860 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-01-10 15:06:27.101866 | orchestrator | Saturday 10 January 2026 15:06:18 +0000 (0:00:00.687) 0:00:09.861 ****** 2026-01-10 15:06:27.101873 | orchestrator | skipping: [testbed-node-3] 2026-01-10 15:06:27.101879 | orchestrator | 2026-01-10 15:06:27.101886 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-01-10 15:06:27.101892 | orchestrator | Saturday 10 January 2026 15:06:18 +0000 (0:00:00.253) 0:00:10.114 ****** 2026-01-10 15:06:27.101898 | orchestrator | 2026-01-10 15:06:27.101905 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-01-10 15:06:27.101911 | orchestrator | Saturday 10 January 2026 15:06:18 +0000 (0:00:00.074) 0:00:10.189 ****** 2026-01-10 15:06:27.101917 | orchestrator | 2026-01-10 15:06:27.101924 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-01-10 15:06:27.101930 | orchestrator | Saturday 10 January 2026 15:06:18 +0000 (0:00:00.070) 0:00:10.260 ****** 2026-01-10 15:06:27.101936 | orchestrator | 2026-01-10 15:06:27.101947 | orchestrator | TASK [Print report file information] ******************************************* 2026-01-10 15:06:27.101952 | 
orchestrator | Saturday 10 January 2026 15:06:18 +0000 (0:00:00.068) 0:00:10.329 ****** 2026-01-10 15:06:27.101956 | orchestrator | skipping: [testbed-node-3] 2026-01-10 15:06:27.101960 | orchestrator | 2026-01-10 15:06:27.101965 | orchestrator | TASK [Fail early due to containers not running] ******************************** 2026-01-10 15:06:27.101969 | orchestrator | Saturday 10 January 2026 15:06:18 +0000 (0:00:00.266) 0:00:10.595 ****** 2026-01-10 15:06:27.101973 | orchestrator | skipping: [testbed-node-3] 2026-01-10 15:06:27.101978 | orchestrator | 2026-01-10 15:06:27.101982 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-01-10 15:06:27.101986 | orchestrator | Saturday 10 January 2026 15:06:19 +0000 (0:00:00.268) 0:00:10.864 ****** 2026-01-10 15:06:27.101991 | orchestrator | ok: [testbed-node-3] 2026-01-10 15:06:27.101994 | orchestrator | ok: [testbed-node-4] 2026-01-10 15:06:27.101998 | orchestrator | ok: [testbed-node-5] 2026-01-10 15:06:27.102002 | orchestrator | 2026-01-10 15:06:27.102005 | orchestrator | TASK [Set _mon_hostname fact] ************************************************** 2026-01-10 15:06:27.102009 | orchestrator | Saturday 10 January 2026 15:06:19 +0000 (0:00:00.306) 0:00:11.170 ****** 2026-01-10 15:06:27.102057 | orchestrator | ok: [testbed-node-3] 2026-01-10 15:06:27.102062 | orchestrator | 2026-01-10 15:06:27.102065 | orchestrator | TASK [Get ceph osd tree] ******************************************************* 2026-01-10 15:06:27.102069 | orchestrator | Saturday 10 January 2026 15:06:19 +0000 (0:00:00.230) 0:00:11.401 ****** 2026-01-10 15:06:27.102073 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-01-10 15:06:27.102077 | orchestrator | 2026-01-10 15:06:27.102080 | orchestrator | TASK [Parse osd tree from JSON] ************************************************ 2026-01-10 15:06:27.102084 | orchestrator | Saturday 10 January 2026 15:06:21 +0000 
(0:00:02.123) 0:00:13.525 ****** 2026-01-10 15:06:27.102094 | orchestrator | ok: [testbed-node-3] 2026-01-10 15:06:27.102098 | orchestrator | 2026-01-10 15:06:27.102101 | orchestrator | TASK [Get OSDs that are not up or in] ****************************************** 2026-01-10 15:06:27.102106 | orchestrator | Saturday 10 January 2026 15:06:22 +0000 (0:00:00.129) 0:00:13.654 ****** 2026-01-10 15:06:27.102112 | orchestrator | ok: [testbed-node-3] 2026-01-10 15:06:27.102118 | orchestrator | 2026-01-10 15:06:27.102123 | orchestrator | TASK [Fail test if OSDs are not up or in] ************************************** 2026-01-10 15:06:27.102129 | orchestrator | Saturday 10 January 2026 15:06:22 +0000 (0:00:00.317) 0:00:13.972 ****** 2026-01-10 15:06:27.102134 | orchestrator | skipping: [testbed-node-3] 2026-01-10 15:06:27.102140 | orchestrator | 2026-01-10 15:06:27.102148 | orchestrator | TASK [Pass test if OSDs are all up and in] ************************************* 2026-01-10 15:06:27.102156 | orchestrator | Saturday 10 January 2026 15:06:22 +0000 (0:00:00.118) 0:00:14.091 ****** 2026-01-10 15:06:27.102163 | orchestrator | ok: [testbed-node-3] 2026-01-10 15:06:27.102168 | orchestrator | 2026-01-10 15:06:27.102174 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-01-10 15:06:27.102180 | orchestrator | Saturday 10 January 2026 15:06:22 +0000 (0:00:00.131) 0:00:14.223 ****** 2026-01-10 15:06:27.102186 | orchestrator | ok: [testbed-node-3] 2026-01-10 15:06:27.102192 | orchestrator | ok: [testbed-node-4] 2026-01-10 15:06:27.102198 | orchestrator | ok: [testbed-node-5] 2026-01-10 15:06:27.102203 | orchestrator | 2026-01-10 15:06:27.102209 | orchestrator | TASK [List ceph LVM volumes and collect data] ********************************** 2026-01-10 15:06:27.102215 | orchestrator | Saturday 10 January 2026 15:06:22 +0000 (0:00:00.306) 0:00:14.529 ****** 2026-01-10 15:06:27.102222 | orchestrator | changed: [testbed-node-3] 
2026-01-10 15:06:27.102228 | orchestrator | changed: [testbed-node-4] 2026-01-10 15:06:27.102234 | orchestrator | changed: [testbed-node-5] 2026-01-10 15:06:27.102241 | orchestrator | 2026-01-10 15:06:27.102247 | orchestrator | TASK [Parse LVM data as JSON] ************************************************** 2026-01-10 15:06:27.102256 | orchestrator | Saturday 10 January 2026 15:06:25 +0000 (0:00:02.815) 0:00:17.345 ****** 2026-01-10 15:06:27.102264 | orchestrator | ok: [testbed-node-3] 2026-01-10 15:06:27.102270 | orchestrator | ok: [testbed-node-4] 2026-01-10 15:06:27.102275 | orchestrator | ok: [testbed-node-5] 2026-01-10 15:06:27.102281 | orchestrator | 2026-01-10 15:06:27.102287 | orchestrator | TASK [Get unencrypted and encrypted OSDs] ************************************** 2026-01-10 15:06:27.102293 | orchestrator | Saturday 10 January 2026 15:06:26 +0000 (0:00:00.535) 0:00:17.881 ****** 2026-01-10 15:06:27.102298 | orchestrator | ok: [testbed-node-3] 2026-01-10 15:06:27.102304 | orchestrator | ok: [testbed-node-4] 2026-01-10 15:06:27.102309 | orchestrator | ok: [testbed-node-5] 2026-01-10 15:06:27.102315 | orchestrator | 2026-01-10 15:06:27.102321 | orchestrator | TASK [Fail if count of encrypted OSDs does not match] ************************** 2026-01-10 15:06:27.102326 | orchestrator | Saturday 10 January 2026 15:06:26 +0000 (0:00:00.511) 0:00:18.392 ****** 2026-01-10 15:06:27.102332 | orchestrator | skipping: [testbed-node-3] 2026-01-10 15:06:27.102338 | orchestrator | skipping: [testbed-node-4] 2026-01-10 15:06:27.102343 | orchestrator | skipping: [testbed-node-5] 2026-01-10 15:06:27.102348 | orchestrator | 2026-01-10 15:06:27.102362 | orchestrator | TASK [Pass if count of encrypted OSDs equals count of OSDs] ******************** 2026-01-10 15:06:36.716966 | orchestrator | Saturday 10 January 2026 15:06:27 +0000 (0:00:00.345) 0:00:18.738 ****** 2026-01-10 15:06:36.717072 | orchestrator | ok: [testbed-node-3] 2026-01-10 15:06:36.717083 | orchestrator | ok: 
[testbed-node-4] 2026-01-10 15:06:36.717089 | orchestrator | ok: [testbed-node-5] 2026-01-10 15:06:36.717096 | orchestrator | 2026-01-10 15:06:36.717103 | orchestrator | TASK [Fail if count of unencrypted OSDs does not match] ************************ 2026-01-10 15:06:36.717111 | orchestrator | Saturday 10 January 2026 15:06:27 +0000 (0:00:00.532) 0:00:19.270 ****** 2026-01-10 15:06:36.717118 | orchestrator | skipping: [testbed-node-3] 2026-01-10 15:06:36.717126 | orchestrator | skipping: [testbed-node-4] 2026-01-10 15:06:36.717133 | orchestrator | skipping: [testbed-node-5] 2026-01-10 15:06:36.717161 | orchestrator | 2026-01-10 15:06:36.717168 | orchestrator | TASK [Pass if count of unencrypted OSDs equals count of OSDs] ****************** 2026-01-10 15:06:36.717175 | orchestrator | Saturday 10 January 2026 15:06:27 +0000 (0:00:00.290) 0:00:19.560 ****** 2026-01-10 15:06:36.717181 | orchestrator | skipping: [testbed-node-3] 2026-01-10 15:06:36.717188 | orchestrator | skipping: [testbed-node-4] 2026-01-10 15:06:36.717195 | orchestrator | skipping: [testbed-node-5] 2026-01-10 15:06:36.717201 | orchestrator | 2026-01-10 15:06:36.717208 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-01-10 15:06:36.717228 | orchestrator | Saturday 10 January 2026 15:06:28 +0000 (0:00:00.333) 0:00:19.894 ****** 2026-01-10 15:06:36.717235 | orchestrator | ok: [testbed-node-3] 2026-01-10 15:06:36.717241 | orchestrator | ok: [testbed-node-4] 2026-01-10 15:06:36.717248 | orchestrator | ok: [testbed-node-5] 2026-01-10 15:06:36.717254 | orchestrator | 2026-01-10 15:06:36.717260 | orchestrator | TASK [Get CRUSH node data of each OSD host and root node childs] *************** 2026-01-10 15:06:36.717267 | orchestrator | Saturday 10 January 2026 15:06:28 +0000 (0:00:00.585) 0:00:20.479 ****** 2026-01-10 15:06:36.717273 | orchestrator | ok: [testbed-node-3] 2026-01-10 15:06:36.717280 | orchestrator | ok: [testbed-node-4] 2026-01-10 
15:06:36.717287 | orchestrator | ok: [testbed-node-5] 2026-01-10 15:06:36.717293 | orchestrator | 2026-01-10 15:06:36.717300 | orchestrator | TASK [Calculate sub test expression results] *********************************** 2026-01-10 15:06:36.717306 | orchestrator | Saturday 10 January 2026 15:06:29 +0000 (0:00:01.043) 0:00:21.523 ****** 2026-01-10 15:06:36.717313 | orchestrator | ok: [testbed-node-3] 2026-01-10 15:06:36.717319 | orchestrator | ok: [testbed-node-4] 2026-01-10 15:06:36.717326 | orchestrator | ok: [testbed-node-5] 2026-01-10 15:06:36.717332 | orchestrator | 2026-01-10 15:06:36.717339 | orchestrator | TASK [Fail test if any sub test failed] **************************************** 2026-01-10 15:06:36.717345 | orchestrator | Saturday 10 January 2026 15:06:30 +0000 (0:00:00.336) 0:00:21.860 ****** 2026-01-10 15:06:36.717351 | orchestrator | skipping: [testbed-node-3] 2026-01-10 15:06:36.717358 | orchestrator | skipping: [testbed-node-4] 2026-01-10 15:06:36.717365 | orchestrator | skipping: [testbed-node-5] 2026-01-10 15:06:36.717371 | orchestrator | 2026-01-10 15:06:36.717377 | orchestrator | TASK [Pass test if no sub test failed] ***************************************** 2026-01-10 15:06:36.717384 | orchestrator | Saturday 10 January 2026 15:06:30 +0000 (0:00:00.300) 0:00:22.161 ****** 2026-01-10 15:06:36.717390 | orchestrator | ok: [testbed-node-3] 2026-01-10 15:06:36.717397 | orchestrator | ok: [testbed-node-4] 2026-01-10 15:06:36.717403 | orchestrator | ok: [testbed-node-5] 2026-01-10 15:06:36.717410 | orchestrator | 2026-01-10 15:06:36.717416 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2026-01-10 15:06:36.717423 | orchestrator | Saturday 10 January 2026 15:06:30 +0000 (0:00:00.329) 0:00:22.490 ****** 2026-01-10 15:06:36.717430 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-01-10 15:06:36.717436 | orchestrator | 2026-01-10 15:06:36.717442 | orchestrator | TASK [Set 
validation result to failed if a test failed] ************************ 2026-01-10 15:06:36.717448 | orchestrator | Saturday 10 January 2026 15:06:31 +0000 (0:00:00.289) 0:00:22.779 ****** 2026-01-10 15:06:36.717454 | orchestrator | skipping: [testbed-node-3] 2026-01-10 15:06:36.717461 | orchestrator | 2026-01-10 15:06:36.717467 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-01-10 15:06:36.717474 | orchestrator | Saturday 10 January 2026 15:06:31 +0000 (0:00:00.868) 0:00:23.648 ****** 2026-01-10 15:06:36.717480 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-01-10 15:06:36.717487 | orchestrator | 2026-01-10 15:06:36.717494 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-01-10 15:06:36.717500 | orchestrator | Saturday 10 January 2026 15:06:33 +0000 (0:00:01.664) 0:00:25.312 ****** 2026-01-10 15:06:36.717507 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-01-10 15:06:36.717514 | orchestrator | 2026-01-10 15:06:36.717521 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-01-10 15:06:36.717535 | orchestrator | Saturday 10 January 2026 15:06:33 +0000 (0:00:00.274) 0:00:25.586 ****** 2026-01-10 15:06:36.717543 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-01-10 15:06:36.717549 | orchestrator | 2026-01-10 15:06:36.717556 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-01-10 15:06:36.717563 | orchestrator | Saturday 10 January 2026 15:06:34 +0000 (0:00:00.274) 0:00:25.861 ****** 2026-01-10 15:06:36.717570 | orchestrator | 2026-01-10 15:06:36.717576 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-01-10 15:06:36.717583 | orchestrator | Saturday 10 January 2026 15:06:34 +0000 (0:00:00.072) 0:00:25.933 ****** 2026-01-10 
15:06:36.717590 | orchestrator | 2026-01-10 15:06:36.717597 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-01-10 15:06:36.717604 | orchestrator | Saturday 10 January 2026 15:06:34 +0000 (0:00:00.073) 0:00:26.007 ****** 2026-01-10 15:06:36.717610 | orchestrator | 2026-01-10 15:06:36.717617 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2026-01-10 15:06:36.717624 | orchestrator | Saturday 10 January 2026 15:06:34 +0000 (0:00:00.075) 0:00:26.083 ****** 2026-01-10 15:06:36.717630 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-01-10 15:06:36.717637 | orchestrator | 2026-01-10 15:06:36.717643 | orchestrator | TASK [Print report file information] ******************************************* 2026-01-10 15:06:36.717650 | orchestrator | Saturday 10 January 2026 15:06:35 +0000 (0:00:01.405) 0:00:27.488 ****** 2026-01-10 15:06:36.717672 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => { 2026-01-10 15:06:36.717679 | orchestrator |  "msg": [ 2026-01-10 15:06:36.717685 | orchestrator |  "Validator run completed.", 2026-01-10 15:06:36.717692 | orchestrator |  "You can find the report file here:", 2026-01-10 15:06:36.717698 | orchestrator |  "/opt/reports/validator/ceph-osds-validator-2026-01-10T15:06:09+00:00-report.json", 2026-01-10 15:06:36.717706 | orchestrator |  "on the following host:", 2026-01-10 15:06:36.717713 | orchestrator |  "testbed-manager" 2026-01-10 15:06:36.717719 | orchestrator |  ] 2026-01-10 15:06:36.717726 | orchestrator | } 2026-01-10 15:06:36.717733 | orchestrator | 2026-01-10 15:06:36.717740 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-10 15:06:36.717747 | orchestrator | testbed-node-3 : ok=35  changed=4  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-01-10 15:06:36.717755 | orchestrator | testbed-node-4 : ok=18  changed=1 
 unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-01-10 15:06:36.717765 | orchestrator | testbed-node-5 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-01-10 15:06:36.717772 | orchestrator | 2026-01-10 15:06:36.717778 | orchestrator | 2026-01-10 15:06:36.717784 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-10 15:06:36.717791 | orchestrator | Saturday 10 January 2026 15:06:36 +0000 (0:00:00.433) 0:00:27.921 ****** 2026-01-10 15:06:36.717797 | orchestrator | =============================================================================== 2026-01-10 15:06:36.717804 | orchestrator | List ceph LVM volumes and collect data ---------------------------------- 2.82s 2026-01-10 15:06:36.717883 | orchestrator | Get ceph osd tree ------------------------------------------------------- 2.12s 2026-01-10 15:06:36.717891 | orchestrator | Aggregate test results step one ----------------------------------------- 1.66s 2026-01-10 15:06:36.717897 | orchestrator | Write report file ------------------------------------------------------- 1.41s 2026-01-10 15:06:36.717903 | orchestrator | Get CRUSH node data of each OSD host and root node childs --------------- 1.04s 2026-01-10 15:06:36.717910 | orchestrator | Set validation result to failed if a test failed ------------------------ 0.87s 2026-01-10 15:06:36.717916 | orchestrator | Get timestamp for report file ------------------------------------------- 0.84s 2026-01-10 15:06:36.717928 | orchestrator | Create report output directory ------------------------------------------ 0.77s 2026-01-10 15:06:36.717934 | orchestrator | Aggregate test results step two ----------------------------------------- 0.69s 2026-01-10 15:06:36.717940 | orchestrator | Calculate total number of OSDs in cluster ------------------------------- 0.62s 2026-01-10 15:06:36.717946 | orchestrator | Prepare test data ------------------------------------------------------- 
0.59s 2026-01-10 15:06:36.717953 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.56s 2026-01-10 15:06:36.717959 | orchestrator | Parse LVM data as JSON -------------------------------------------------- 0.54s 2026-01-10 15:06:36.717965 | orchestrator | Pass if count of encrypted OSDs equals count of OSDs -------------------- 0.53s 2026-01-10 15:06:36.717971 | orchestrator | Prepare test data ------------------------------------------------------- 0.53s 2026-01-10 15:06:36.717978 | orchestrator | Get list of ceph-osd containers on host --------------------------------- 0.53s 2026-01-10 15:06:36.717984 | orchestrator | Set test result to failed when count of containers is wrong ------------- 0.52s 2026-01-10 15:06:36.717990 | orchestrator | Get unencrypted and encrypted OSDs -------------------------------------- 0.51s 2026-01-10 15:06:36.717996 | orchestrator | Get count of ceph-osd containers that are not running ------------------- 0.50s 2026-01-10 15:06:36.718003 | orchestrator | Aggregate test results step one ----------------------------------------- 0.50s 2026-01-10 15:06:37.108859 | orchestrator | + sh -c /opt/configuration/scripts/check/200-infrastructure.sh 2026-01-10 15:06:37.121273 | orchestrator | + set -e 2026-01-10 15:06:37.121371 | orchestrator | + source /opt/manager-vars.sh 2026-01-10 15:06:37.121385 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-01-10 15:06:37.121395 | orchestrator | ++ NUMBER_OF_NODES=6 2026-01-10 15:06:37.121403 | orchestrator | ++ export CEPH_VERSION=reef 2026-01-10 15:06:37.121413 | orchestrator | ++ CEPH_VERSION=reef 2026-01-10 15:06:37.121422 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-01-10 15:06:37.121431 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-01-10 15:06:37.121439 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-01-10 15:06:37.121448 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-01-10 15:06:37.121456 | orchestrator | ++ export 
OPENSTACK_VERSION=2024.2 2026-01-10 15:06:37.121463 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-01-10 15:06:37.121471 | orchestrator | ++ export ARA=false 2026-01-10 15:06:37.121481 | orchestrator | ++ ARA=false 2026-01-10 15:06:37.121489 | orchestrator | ++ export DEPLOY_MODE=manager 2026-01-10 15:06:37.121498 | orchestrator | ++ DEPLOY_MODE=manager 2026-01-10 15:06:37.121507 | orchestrator | ++ export TEMPEST=false 2026-01-10 15:06:37.121515 | orchestrator | ++ TEMPEST=false 2026-01-10 15:06:37.121523 | orchestrator | ++ export IS_ZUUL=true 2026-01-10 15:06:37.121532 | orchestrator | ++ IS_ZUUL=true 2026-01-10 15:06:37.121538 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.106 2026-01-10 15:06:37.121543 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.106 2026-01-10 15:06:37.121549 | orchestrator | ++ export EXTERNAL_API=false 2026-01-10 15:06:37.121554 | orchestrator | ++ EXTERNAL_API=false 2026-01-10 15:06:37.121559 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-01-10 15:06:37.121564 | orchestrator | ++ IMAGE_USER=ubuntu 2026-01-10 15:06:37.121570 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-01-10 15:06:37.121579 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-01-10 15:06:37.121587 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-01-10 15:06:37.121595 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-01-10 15:06:37.121603 | orchestrator | + [[ -e /etc/redhat-release ]] 2026-01-10 15:06:37.121612 | orchestrator | + source /etc/os-release 2026-01-10 15:06:37.121619 | orchestrator | ++ PRETTY_NAME='Ubuntu 24.04.3 LTS' 2026-01-10 15:06:37.121626 | orchestrator | ++ NAME=Ubuntu 2026-01-10 15:06:37.121633 | orchestrator | ++ VERSION_ID=24.04 2026-01-10 15:06:37.121641 | orchestrator | ++ VERSION='24.04.3 LTS (Noble Numbat)' 2026-01-10 15:06:37.121649 | orchestrator | ++ VERSION_CODENAME=noble 2026-01-10 15:06:37.121656 | orchestrator | ++ ID=ubuntu 2026-01-10 15:06:37.121665 | orchestrator | ++ ID_LIKE=debian 
2026-01-10 15:06:37.121672 | orchestrator | ++ HOME_URL=https://www.ubuntu.com/ 2026-01-10 15:06:37.121680 | orchestrator | ++ SUPPORT_URL=https://help.ubuntu.com/ 2026-01-10 15:06:37.121687 | orchestrator | ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/ 2026-01-10 15:06:37.121697 | orchestrator | ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy 2026-01-10 15:06:37.121705 | orchestrator | ++ UBUNTU_CODENAME=noble 2026-01-10 15:06:37.121738 | orchestrator | ++ LOGO=ubuntu-logo 2026-01-10 15:06:37.121746 | orchestrator | + [[ ubuntu == \u\b\u\n\t\u ]] 2026-01-10 15:06:37.121755 | orchestrator | + packages='libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client' 2026-01-10 15:06:37.121764 | orchestrator | + dpkg -s libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2026-01-10 15:06:37.151685 | orchestrator | + sudo apt-get install -y libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2026-01-10 15:07:01.146605 | orchestrator | 2026-01-10 15:07:01.146703 | orchestrator | # Status of Elasticsearch 2026-01-10 15:07:01.146711 | orchestrator | 2026-01-10 15:07:01.146716 | orchestrator | + pushd /opt/configuration/contrib 2026-01-10 15:07:01.146721 | orchestrator | + echo 2026-01-10 15:07:01.146725 | orchestrator | + echo '# Status of Elasticsearch' 2026-01-10 15:07:01.146729 | orchestrator | + echo 2026-01-10 15:07:01.146734 | orchestrator | + bash nagios-plugins/check_elasticsearch -H api-int.testbed.osism.xyz -s 2026-01-10 15:07:01.327680 | orchestrator | OK - elasticsearch (kolla_logging) is running. 
status: green; timed_out: false; number_of_nodes: 3; number_of_data_nodes: 3; active_primary_shards: 9; active_shards: 22; relocating_shards: 0; initializing_shards: 0; delayed_unassigned_shards: 0; unassigned_shards: 0 | 'active_primary'=9 'active'=22 'relocating'=0 'init'=0 'delay_unass'=0 'unass'=0 2026-01-10 15:07:01.328085 | orchestrator | 2026-01-10 15:07:01.328181 | orchestrator | # Status of MariaDB 2026-01-10 15:07:01.328195 | orchestrator | 2026-01-10 15:07:01.328204 | orchestrator | + echo 2026-01-10 15:07:01.328213 | orchestrator | + echo '# Status of MariaDB' 2026-01-10 15:07:01.328221 | orchestrator | + echo 2026-01-10 15:07:01.329224 | orchestrator | ++ semver 9.5.0 10.0.0-0 2026-01-10 15:07:01.395260 | orchestrator | + [[ -1 -ge 0 ]] 2026-01-10 15:07:01.395353 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]] 2026-01-10 15:07:01.395364 | orchestrator | + MARIADB_USER=root_shard_0 2026-01-10 15:07:01.395374 | orchestrator | + bash nagios-plugins/check_galera_cluster -u root_shard_0 -p password -H api-int.testbed.osism.xyz -c 1 2026-01-10 15:07:01.469406 | orchestrator | Reading package lists... 2026-01-10 15:07:01.817039 | orchestrator | Building dependency tree... 2026-01-10 15:07:01.817590 | orchestrator | Reading state information... 2026-01-10 15:07:02.280382 | orchestrator | bc is already the newest version (1.07.1-3ubuntu4). 2026-01-10 15:07:02.280470 | orchestrator | bc set to manually installed. 2026-01-10 15:07:02.280480 | orchestrator | 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 
2026-01-10 15:07:02.975182 | orchestrator | OK: number of NODES = 3 (wsrep_cluster_size) 2026-01-10 15:07:02.976596 | orchestrator | 2026-01-10 15:07:02.976642 | orchestrator | # Status of Prometheus 2026-01-10 15:07:02.976652 | orchestrator | 2026-01-10 15:07:02.976661 | orchestrator | + echo 2026-01-10 15:07:02.976670 | orchestrator | + echo '# Status of Prometheus' 2026-01-10 15:07:02.976678 | orchestrator | + echo 2026-01-10 15:07:02.976686 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/healthy 2026-01-10 15:07:03.038166 | orchestrator | Unauthorized 2026-01-10 15:07:03.042306 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/ready 2026-01-10 15:07:03.106243 | orchestrator | Unauthorized 2026-01-10 15:07:03.110157 | orchestrator | 2026-01-10 15:07:03.110246 | orchestrator | # Status of RabbitMQ 2026-01-10 15:07:03.110260 | orchestrator | 2026-01-10 15:07:03.110270 | orchestrator | + echo 2026-01-10 15:07:03.110280 | orchestrator | + echo '# Status of RabbitMQ' 2026-01-10 15:07:03.110289 | orchestrator | + echo 2026-01-10 15:07:03.111021 | orchestrator | ++ semver 9.5.0 10.0.0-0 2026-01-10 15:07:03.165906 | orchestrator | + [[ -1 -ge 0 ]] 2026-01-10 15:07:03.165987 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]] 2026-01-10 15:07:03.165998 | orchestrator | + perl nagios-plugins/check_rabbitmq_cluster --ssl 1 -H api-int.testbed.osism.xyz -u openstack -p password 2026-01-10 15:07:03.653000 | orchestrator | RABBITMQ_CLUSTER OK - nb_running_node OK (3) nb_running_disc_node OK (3) nb_running_ram_node OK (0) 2026-01-10 15:07:03.660705 | orchestrator | 2026-01-10 15:07:03.660770 | orchestrator | # Status of Redis 2026-01-10 15:07:03.660776 | orchestrator | 2026-01-10 15:07:03.660781 | orchestrator | + echo 2026-01-10 15:07:03.660786 | orchestrator | + echo '# Status of Redis' 2026-01-10 15:07:03.660790 | orchestrator | + echo 2026-01-10 15:07:03.660796 | orchestrator | + /usr/lib/nagios/plugins/check_tcp -H 192.168.16.10 -p 6379 -A 
-E -s 'AUTH QHNA1SZRlOKzLADhUd5ZDgpHfQe6dNfr3bwEdY24\r\nPING\r\nINFO replication\r\nQUIT\r\n' -e PONG -e role:master -e slave0:ip=192.168.16.1 -e,port=6379 -j 2026-01-10 15:07:03.665428 | orchestrator | TCP OK - 0.001 second response time on 192.168.16.10 port 6379|time=0.001459s;;;0.000000;10.000000 2026-01-10 15:07:03.665515 | orchestrator | + popd 2026-01-10 15:07:03.665521 | orchestrator | 2026-01-10 15:07:03.665526 | orchestrator | # Create backup of MariaDB database 2026-01-10 15:07:03.665530 | orchestrator | 2026-01-10 15:07:03.665534 | orchestrator | + echo 2026-01-10 15:07:03.665539 | orchestrator | + echo '# Create backup of MariaDB database' 2026-01-10 15:07:03.665543 | orchestrator | + echo 2026-01-10 15:07:03.665547 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=full 2026-01-10 15:07:05.737186 | orchestrator | 2026-01-10 15:07:05 | INFO  | Task 921d7606-31cb-481d-af93-a61b5a331693 (mariadb_backup) was prepared for execution. 2026-01-10 15:07:05.737284 | orchestrator | 2026-01-10 15:07:05 | INFO  | It takes a moment until task 921d7606-31cb-481d-af93-a61b5a331693 (mariadb_backup) has been started and output is visible here. 
2026-01-10 15:08:14.917247 | orchestrator | 2026-01-10 15:08:14.917354 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-10 15:08:14.917367 | orchestrator | 2026-01-10 15:08:14.917375 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-10 15:08:14.917382 | orchestrator | Saturday 10 January 2026 15:07:09 +0000 (0:00:00.171) 0:00:00.171 ****** 2026-01-10 15:08:14.917389 | orchestrator | ok: [testbed-node-0] 2026-01-10 15:08:14.917397 | orchestrator | ok: [testbed-node-1] 2026-01-10 15:08:14.917403 | orchestrator | ok: [testbed-node-2] 2026-01-10 15:08:14.917410 | orchestrator | 2026-01-10 15:08:14.917415 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-10 15:08:14.917419 | orchestrator | Saturday 10 January 2026 15:07:10 +0000 (0:00:00.365) 0:00:00.536 ****** 2026-01-10 15:08:14.917424 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-01-10 15:08:14.917429 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2026-01-10 15:08:14.917434 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-01-10 15:08:14.917438 | orchestrator | 2026-01-10 15:08:14.917441 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-01-10 15:08:14.917445 | orchestrator | 2026-01-10 15:08:14.917449 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-01-10 15:08:14.917453 | orchestrator | Saturday 10 January 2026 15:07:10 +0000 (0:00:00.591) 0:00:01.127 ****** 2026-01-10 15:08:14.917457 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-01-10 15:08:14.917461 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-01-10 15:08:14.917464 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-01-10 15:08:14.917468 | orchestrator | 
2026-01-10 15:08:14.917472 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-01-10 15:08:14.917476 | orchestrator | Saturday 10 January 2026 15:07:11 +0000 (0:00:00.396) 0:00:01.524 ****** 2026-01-10 15:08:14.917480 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 15:08:14.917485 | orchestrator | 2026-01-10 15:08:14.917489 | orchestrator | TASK [mariadb : Get MariaDB container facts] *********************************** 2026-01-10 15:08:14.917493 | orchestrator | Saturday 10 January 2026 15:07:11 +0000 (0:00:00.579) 0:00:02.103 ****** 2026-01-10 15:08:14.917551 | orchestrator | ok: [testbed-node-1] 2026-01-10 15:08:14.917561 | orchestrator | ok: [testbed-node-0] 2026-01-10 15:08:14.917565 | orchestrator | ok: [testbed-node-2] 2026-01-10 15:08:14.917569 | orchestrator | 2026-01-10 15:08:14.917573 | orchestrator | TASK [mariadb : Taking full database backup via Mariabackup] ******************* 2026-01-10 15:08:14.917576 | orchestrator | Saturday 10 January 2026 15:07:15 +0000 (0:00:03.176) 0:00:05.280 ****** 2026-01-10 15:08:14.917580 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2026-01-10 15:08:14.917584 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start 2026-01-10 15:08:14.917589 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-01-10 15:08:14.917593 | orchestrator | mariadb_bootstrap_restart 2026-01-10 15:08:14.917628 | orchestrator | skipping: [testbed-node-1] 2026-01-10 15:08:14.917632 | orchestrator | skipping: [testbed-node-2] 2026-01-10 15:08:14.917636 | orchestrator | changed: [testbed-node-0] 2026-01-10 15:08:14.917640 | orchestrator | 2026-01-10 15:08:14.917643 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-01-10 15:08:14.917647 | orchestrator | 
skipping: no hosts matched 2026-01-10 15:08:14.917651 | orchestrator | 2026-01-10 15:08:14.917654 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-01-10 15:08:14.917658 | orchestrator | skipping: no hosts matched 2026-01-10 15:08:14.917662 | orchestrator | 2026-01-10 15:08:14.917665 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-01-10 15:08:14.917669 | orchestrator | skipping: no hosts matched 2026-01-10 15:08:14.917673 | orchestrator | 2026-01-10 15:08:14.917676 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-01-10 15:08:14.917680 | orchestrator | 2026-01-10 15:08:14.917684 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-01-10 15:08:14.917687 | orchestrator | Saturday 10 January 2026 15:08:13 +0000 (0:00:58.808) 0:01:04.089 ****** 2026-01-10 15:08:14.917691 | orchestrator | skipping: [testbed-node-0] 2026-01-10 15:08:14.917695 | orchestrator | skipping: [testbed-node-1] 2026-01-10 15:08:14.917698 | orchestrator | skipping: [testbed-node-2] 2026-01-10 15:08:14.917702 | orchestrator | 2026-01-10 15:08:14.917705 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-01-10 15:08:14.917709 | orchestrator | Saturday 10 January 2026 15:08:14 +0000 (0:00:00.322) 0:01:04.411 ****** 2026-01-10 15:08:14.917713 | orchestrator | skipping: [testbed-node-0] 2026-01-10 15:08:14.917717 | orchestrator | skipping: [testbed-node-1] 2026-01-10 15:08:14.917720 | orchestrator | skipping: [testbed-node-2] 2026-01-10 15:08:14.917724 | orchestrator | 2026-01-10 15:08:14.917728 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-10 15:08:14.917732 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-10 
15:08:14.917737 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-01-10 15:08:14.917741 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-01-10 15:08:14.917745 | orchestrator | 2026-01-10 15:08:14.917749 | orchestrator | 2026-01-10 15:08:14.917753 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-10 15:08:14.917756 | orchestrator | Saturday 10 January 2026 15:08:14 +0000 (0:00:00.407) 0:01:04.819 ****** 2026-01-10 15:08:14.917760 | orchestrator | =============================================================================== 2026-01-10 15:08:14.917764 | orchestrator | mariadb : Taking full database backup via Mariabackup ------------------ 58.81s 2026-01-10 15:08:14.917780 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 3.18s 2026-01-10 15:08:14.917784 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.59s 2026-01-10 15:08:14.917788 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.58s 2026-01-10 15:08:14.917791 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.41s 2026-01-10 15:08:14.917795 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.40s 2026-01-10 15:08:14.917799 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.37s 2026-01-10 15:08:14.917804 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.32s 2026-01-10 15:08:15.263861 | orchestrator | + sh -c /opt/configuration/scripts/check/300-openstack.sh 2026-01-10 15:08:15.274434 | orchestrator | + set -e 2026-01-10 15:08:15.274512 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-01-10 15:08:15.274607 | orchestrator | ++ export 
INTERACTIVE=false 2026-01-10 15:08:15.274620 | orchestrator | ++ INTERACTIVE=false 2026-01-10 15:08:15.274741 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-01-10 15:08:15.274854 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-01-10 15:08:15.275539 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-01-10 15:08:15.276815 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2026-01-10 15:08:15.280302 | orchestrator | 2026-01-10 15:08:15.280369 | orchestrator | # OpenStack endpoints 2026-01-10 15:08:15.280376 | orchestrator | 2026-01-10 15:08:15.280381 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-01-10 15:08:15.280385 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-01-10 15:08:15.280389 | orchestrator | + export OS_CLOUD=admin 2026-01-10 15:08:15.280393 | orchestrator | + OS_CLOUD=admin 2026-01-10 15:08:15.280398 | orchestrator | + echo 2026-01-10 15:08:15.280402 | orchestrator | + echo '# OpenStack endpoints' 2026-01-10 15:08:15.280406 | orchestrator | + echo 2026-01-10 15:08:15.280409 | orchestrator | + openstack endpoint list 2026-01-10 15:08:18.689329 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2026-01-10 15:08:18.689425 | orchestrator | | ID | Region | Service Name | Service Type | Enabled | Interface | URL | 2026-01-10 15:08:18.689435 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2026-01-10 15:08:18.689443 | orchestrator | | 0003d4f479934d98919f0e0b431ad016 | RegionOne | octavia | load-balancer | True | public | https://api.testbed.osism.xyz:9876 | 2026-01-10 15:08:18.689448 | orchestrator | | 01472d6ad2714e5585eabfa0ca2c4790 | RegionOne | keystone | identity | True | 
public | https://api.testbed.osism.xyz:5000 | 2026-01-10 15:08:18.689464 | orchestrator | | 045b2e4c439d419c813d3ac4c9062068 | RegionOne | barbican | key-manager | True | internal | https://api-int.testbed.osism.xyz:9311 | 2026-01-10 15:08:18.689468 | orchestrator | | 056871e3986b4c9c9d6b6b1ac6769a13 | RegionOne | glance | image | True | public | https://api.testbed.osism.xyz:9292 | 2026-01-10 15:08:18.689472 | orchestrator | | 0a8930f16f6e4f609457a2ec91945dac | RegionOne | designate | dns | True | public | https://api.testbed.osism.xyz:9001 | 2026-01-10 15:08:18.689476 | orchestrator | | 1c8d55e5e50e47b4b00551c47e487cb5 | RegionOne | nova | compute | True | internal | https://api-int.testbed.osism.xyz:8774/v2.1 | 2026-01-10 15:08:18.689480 | orchestrator | | 2ce5bf765bdd4e41bae0385c50c52ca8 | RegionOne | neutron | network | True | public | https://api.testbed.osism.xyz:9696 | 2026-01-10 15:08:18.689483 | orchestrator | | 48dfb570747947e6ba3a6cc034c83edf | RegionOne | nova | compute | True | public | https://api.testbed.osism.xyz:8774/v2.1 | 2026-01-10 15:08:18.689487 | orchestrator | | 4f85d969d18a4b7a865ad418f164b134 | RegionOne | magnum | container-infra | True | internal | https://api-int.testbed.osism.xyz:9511/v1 | 2026-01-10 15:08:18.689492 | orchestrator | | 4faad278d413498c8db30b1ea123eee8 | RegionOne | glance | image | True | internal | https://api-int.testbed.osism.xyz:9292 | 2026-01-10 15:08:18.689534 | orchestrator | | 516762c8429944abbaa4fa3220f5129d | RegionOne | placement | placement | True | internal | https://api-int.testbed.osism.xyz:8780 | 2026-01-10 15:08:18.689540 | orchestrator | | 57cf499ae5a34a34a17f2439a0154ca9 | RegionOne | keystone | identity | True | internal | https://api-int.testbed.osism.xyz:5000 | 2026-01-10 15:08:18.689544 | orchestrator | | 6efbfb8b080b453881bae092c0961b1d | RegionOne | placement | placement | True | public | https://api.testbed.osism.xyz:8780 | 2026-01-10 15:08:18.689563 | orchestrator | | 
7771933fd1c24bb3902cc9cffe969ec9 | RegionOne | barbican | key-manager | True | public | https://api.testbed.osism.xyz:9311 | 2026-01-10 15:08:18.689567 | orchestrator | | 77f79927dbd6419780068e0e1c890d8c | RegionOne | cinderv3 | volumev3 | True | public | https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2026-01-10 15:08:18.689571 | orchestrator | | 82da6531081b4f178c577d425e84531d | RegionOne | swift | object-store | True | public | https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2026-01-10 15:08:18.689575 | orchestrator | | 8b29d1bcb38c4af4a7c01265e786d91a | RegionOne | neutron | network | True | internal | https://api-int.testbed.osism.xyz:9696 | 2026-01-10 15:08:18.689579 | orchestrator | | 92437adfc48342dabe72dde16017d8de | RegionOne | swift | object-store | True | internal | https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2026-01-10 15:08:18.689582 | orchestrator | | b4f52ebdb3f44514945379a9a60ce55a | RegionOne | designate | dns | True | internal | https://api-int.testbed.osism.xyz:9001 | 2026-01-10 15:08:18.689586 | orchestrator | | c753bf32a436402bbf1c5c6faad9cb03 | RegionOne | cinderv3 | volumev3 | True | internal | https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2026-01-10 15:08:18.689600 | orchestrator | | e3d81d76db7b4be68f1ba67dd1800764 | RegionOne | magnum | container-infra | True | public | https://api.testbed.osism.xyz:9511/v1 | 2026-01-10 15:08:18.689604 | orchestrator | | f7124d078bfd4d50abf8659973caaf8c | RegionOne | octavia | load-balancer | True | internal | https://api-int.testbed.osism.xyz:9876 | 2026-01-10 15:08:18.689608 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2026-01-10 15:08:18.962679 | orchestrator | 2026-01-10 15:08:18.962758 | orchestrator | # Cinder 2026-01-10 15:08:18.962768 | orchestrator | 2026-01-10 15:08:18.962776 | orchestrator 
| + echo 2026-01-10 15:08:18.962783 | orchestrator | + echo '# Cinder' 2026-01-10 15:08:18.962789 | orchestrator | + echo 2026-01-10 15:08:18.962795 | orchestrator | + openstack volume service list 2026-01-10 15:08:21.627428 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2026-01-10 15:08:21.627500 | orchestrator | | Binary | Host | Zone | Status | State | Updated At | 2026-01-10 15:08:21.627506 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2026-01-10 15:08:21.627511 | orchestrator | | cinder-scheduler | testbed-node-0 | internal | enabled | up | 2026-01-10T15:08:19.000000 | 2026-01-10 15:08:21.627528 | orchestrator | | cinder-scheduler | testbed-node-2 | internal | enabled | up | 2026-01-10T15:08:19.000000 | 2026-01-10 15:08:21.627532 | orchestrator | | cinder-scheduler | testbed-node-1 | internal | enabled | up | 2026-01-10T15:08:19.000000 | 2026-01-10 15:08:21.627537 | orchestrator | | cinder-volume | testbed-node-0@rbd-volumes | nova | enabled | up | 2026-01-10T15:08:19.000000 | 2026-01-10 15:08:21.627541 | orchestrator | | cinder-volume | testbed-node-2@rbd-volumes | nova | enabled | up | 2026-01-10T15:08:12.000000 | 2026-01-10 15:08:21.627545 | orchestrator | | cinder-volume | testbed-node-1@rbd-volumes | nova | enabled | up | 2026-01-10T15:08:14.000000 | 2026-01-10 15:08:21.627549 | orchestrator | | cinder-backup | testbed-node-0 | nova | enabled | up | 2026-01-10T15:08:21.000000 | 2026-01-10 15:08:21.627553 | orchestrator | | cinder-backup | testbed-node-1 | nova | enabled | up | 2026-01-10T15:08:13.000000 | 2026-01-10 15:08:21.627558 | orchestrator | | cinder-backup | testbed-node-2 | nova | enabled | up | 2026-01-10T15:08:13.000000 | 2026-01-10 15:08:21.627562 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2026-01-10 
15:08:21.894500 | orchestrator |
2026-01-10 15:08:21.894571 | orchestrator | # Neutron
2026-01-10 15:08:21.894577 | orchestrator |
2026-01-10 15:08:21.894582 | orchestrator | + echo
2026-01-10 15:08:21.894586 | orchestrator | + echo '# Neutron'
2026-01-10 15:08:21.894591 | orchestrator | + echo
2026-01-10 15:08:21.894595 | orchestrator | + openstack network agent list
2026-01-10 15:08:24.758421 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2026-01-10 15:08:24.758501 | orchestrator | | ID | Agent Type | Host | Availability Zone | Alive | State | Binary |
2026-01-10 15:08:24.758507 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2026-01-10 15:08:24.758512 | orchestrator | | testbed-node-2 | OVN Controller Gateway agent | testbed-node-2 | nova | :-) | UP | ovn-controller |
2026-01-10 15:08:24.758516 | orchestrator | | testbed-node-1 | OVN Controller Gateway agent | testbed-node-1 | nova | :-) | UP | ovn-controller |
2026-01-10 15:08:24.758520 | orchestrator | | testbed-node-5 | OVN Controller agent | testbed-node-5 | | :-) | UP | ovn-controller |
2026-01-10 15:08:24.758524 | orchestrator | | testbed-node-3 | OVN Controller agent | testbed-node-3 | | :-) | UP | ovn-controller |
2026-01-10 15:08:24.758528 | orchestrator | | testbed-node-4 | OVN Controller agent | testbed-node-4 | | :-) | UP | ovn-controller |
2026-01-10 15:08:24.758531 | orchestrator | | testbed-node-0 | OVN Controller Gateway agent | testbed-node-0 | nova | :-) | UP | ovn-controller |
2026-01-10 15:08:24.758535 | orchestrator | | 36b9d21c-9928-5c0a-9b27-73ac7a3e770c | OVN Metadata agent | testbed-node-5 | | :-) | UP | neutron-ovn-metadata-agent |
2026-01-10 15:08:24.758539 | orchestrator | | 4939696e-6092-5a33-bb73-b850064684df | OVN Metadata agent | testbed-node-4 | | :-) | UP | neutron-ovn-metadata-agent |
2026-01-10 15:08:24.758542 | orchestrator | | e645415a-98f5-5758-8cd1-c47af282b5c0 | OVN Metadata agent | testbed-node-3 | | :-) | UP | neutron-ovn-metadata-agent |
2026-01-10 15:08:24.758546 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2026-01-10 15:08:25.034002 | orchestrator | + openstack network service provider list
2026-01-10 15:08:27.605275 | orchestrator | +---------------+------+---------+
2026-01-10 15:08:27.605394 | orchestrator | | Service Type | Name | Default |
2026-01-10 15:08:27.605406 | orchestrator | +---------------+------+---------+
2026-01-10 15:08:27.605414 | orchestrator | | L3_ROUTER_NAT | ovn | True |
2026-01-10 15:08:27.605419 | orchestrator | +---------------+------+---------+
2026-01-10 15:08:27.889389 | orchestrator |
2026-01-10 15:08:27.889474 | orchestrator | + echo
2026-01-10 15:08:27.889483 | orchestrator | + echo '# Nova'
2026-01-10 15:08:27.889975 | orchestrator | # Nova
2026-01-10 15:08:27.890064 | orchestrator |
2026-01-10 15:08:27.890076 | orchestrator | + echo
2026-01-10 15:08:27.890086 | orchestrator | + openstack compute service list
2026-01-10 15:08:30.660883 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
2026-01-10 15:08:30.661068 | orchestrator | | ID | Binary | Host | Zone | Status | State | Updated At |
2026-01-10 15:08:30.661082 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
2026-01-10 15:08:30.661090 | orchestrator | | d818f9d1-6231-499b-8793-f99e6cb4c4c5 | nova-scheduler | testbed-node-0 | internal | enabled | up | 2026-01-10T15:08:23.000000 |
2026-01-10 15:08:30.661097 | orchestrator | | 47be383a-268e-4454-ae7c-cf4d64768030 | nova-scheduler | testbed-node-1 | internal | enabled | up | 2026-01-10T15:08:22.000000 |
2026-01-10 15:08:30.661128 | orchestrator | | 69f080c8-3099-4607-9d20-cde1c3ba31ad | nova-scheduler | testbed-node-2 | internal | enabled | up | 2026-01-10T15:08:23.000000 |
2026-01-10 15:08:30.661136 | orchestrator | | 85402a2b-06ae-4d3b-80e7-252dc568abbc | nova-conductor | testbed-node-0 | internal | enabled | up | 2026-01-10T15:08:24.000000 |
2026-01-10 15:08:30.661143 | orchestrator | | 2cc7ca72-9fe2-470e-b48a-fe99f80d3b40 | nova-conductor | testbed-node-1 | internal | enabled | up | 2026-01-10T15:08:26.000000 |
2026-01-10 15:08:30.661150 | orchestrator | | 02ce7682-3709-4d6f-b073-b55b17fb3eaa | nova-conductor | testbed-node-2 | internal | enabled | up | 2026-01-10T15:08:26.000000 |
2026-01-10 15:08:30.661158 | orchestrator | | 68df5b12-b7d1-4c01-932f-6e1cfb045ea6 | nova-compute | testbed-node-3 | nova | enabled | up | 2026-01-10T15:08:27.000000 |
2026-01-10 15:08:30.661164 | orchestrator | | a64fdeab-885f-4802-b352-0bae10a4cd28 | nova-compute | testbed-node-4 | nova | enabled | up | 2026-01-10T15:08:28.000000 |
2026-01-10 15:08:30.661171 | orchestrator | | 7fa97357-9bb7-4e25-b42a-7082dc4cbfce | nova-compute | testbed-node-5 | nova | enabled | up | 2026-01-10T15:08:28.000000 |
2026-01-10 15:08:30.661178 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
2026-01-10 15:08:30.953836 | orchestrator | + openstack hypervisor list
2026-01-10 15:08:33.597143 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+
2026-01-10 15:08:33.597228 | orchestrator | | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State |
2026-01-10 15:08:33.597239 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+
2026-01-10 15:08:33.597246 | orchestrator | | 614da362-3ae4-4a0f-86d4-5da43430ed83 | testbed-node-3 | QEMU | 192.168.16.13 | up |
2026-01-10 15:08:33.597253 | orchestrator | | c0576e4a-42ef-4701-a124-94934a72692b | testbed-node-4 | QEMU | 192.168.16.14 | up |
2026-01-10 15:08:33.597260 | orchestrator | | df27f493-2ca6-4ff6-a5f8-6a4fdd528e0e | testbed-node-5 | QEMU | 192.168.16.15 | up |
2026-01-10 15:08:33.597266 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+
2026-01-10 15:08:33.899384 | orchestrator |
2026-01-10 15:08:33.899471 | orchestrator | # Run OpenStack test play
2026-01-10 15:08:33.899481 | orchestrator |
2026-01-10 15:08:33.899487 | orchestrator | + echo
2026-01-10 15:08:33.899493 | orchestrator | + echo '# Run OpenStack test play'
2026-01-10 15:08:33.899501 | orchestrator | + echo
2026-01-10 15:08:33.899507 | orchestrator | + osism apply --environment openstack test
2026-01-10 15:08:35.875105 | orchestrator | 2026-01-10 15:08:35 | INFO  | Trying to run play test in environment openstack
2026-01-10 15:08:45.957466 | orchestrator | 2026-01-10 15:08:45 | INFO  | Task cd2873dc-743f-4fb4-be96-8c680e3b43ef (test) was prepared for execution.
2026-01-10 15:08:45.957549 | orchestrator | 2026-01-10 15:08:45 | INFO  | It takes a moment until task cd2873dc-743f-4fb4-be96-8c680e3b43ef (test) has been started and output is visible here.
2026-01-10 15:15:49.414966 | orchestrator |
2026-01-10 15:15:49.415077 | orchestrator | PLAY [Create test project] *****************************************************
2026-01-10 15:15:49.415091 | orchestrator |
2026-01-10 15:15:49.415098 | orchestrator | TASK [Create test domain] ******************************************************
2026-01-10 15:15:49.415181 | orchestrator | Saturday 10 January 2026 15:08:50 +0000 (0:00:00.071) 0:00:00.071 ******
2026-01-10 15:15:49.415189 | orchestrator | changed: [localhost]
2026-01-10 15:15:49.415196 | orchestrator |
2026-01-10 15:15:49.415202 | orchestrator | TASK [Create test-admin user] **************************************************
2026-01-10 15:15:49.415208 | orchestrator | Saturday 10 January 2026 15:08:53 +0000 (0:00:03.636) 0:00:03.708 ******
2026-01-10 15:15:49.415214 | orchestrator | changed: [localhost]
2026-01-10 15:15:49.415220 | orchestrator |
2026-01-10 15:15:49.415226 | orchestrator | TASK [Add manager role to user test-admin] *************************************
2026-01-10 15:15:49.415248 | orchestrator | Saturday 10 January 2026 15:08:57 +0000 (0:00:04.101) 0:00:07.809 ******
2026-01-10 15:15:49.415278 | orchestrator | changed: [localhost]
2026-01-10 15:15:49.415285 | orchestrator |
2026-01-10 15:15:49.415291 | orchestrator | TASK [Create test project] *****************************************************
2026-01-10 15:15:49.415297 | orchestrator | Saturday 10 January 2026 15:09:04 +0000 (0:00:06.343) 0:00:14.153 ******
2026-01-10 15:15:49.415303 | orchestrator | changed: [localhost]
2026-01-10 15:15:49.415310 | orchestrator |
2026-01-10 15:15:49.415316 | orchestrator | TASK [Create test user] ********************************************************
2026-01-10 15:15:49.415322 | orchestrator | Saturday 10 January 2026 15:09:08 +0000 (0:00:04.045) 0:00:18.199 ******
2026-01-10 15:15:49.415328 | orchestrator | changed: [localhost]
2026-01-10 15:15:49.415334 | orchestrator |
2026-01-10 15:15:49.415341 | orchestrator | TASK [Add member roles to user test] *******************************************
2026-01-10 15:15:49.415347 | orchestrator | Saturday 10 January 2026 15:09:12 +0000 (0:00:04.071) 0:00:22.270 ******
2026-01-10 15:15:49.415353 | orchestrator | changed: [localhost] => (item=load-balancer_member)
2026-01-10 15:15:49.415360 | orchestrator | changed: [localhost] => (item=member)
2026-01-10 15:15:49.415367 | orchestrator | changed: [localhost] => (item=creator)
2026-01-10 15:15:49.415373 | orchestrator |
2026-01-10 15:15:49.415380 | orchestrator | TASK [Create test server group] ************************************************
2026-01-10 15:15:49.415386 | orchestrator | Saturday 10 January 2026 15:09:23 +0000 (0:00:11.091) 0:00:33.361 ******
2026-01-10 15:15:49.415392 | orchestrator | changed: [localhost]
2026-01-10 15:15:49.415398 | orchestrator |
2026-01-10 15:15:49.415404 | orchestrator | TASK [Create ssh security group] ***********************************************
2026-01-10 15:15:49.415410 | orchestrator | Saturday 10 January 2026 15:09:27 +0000 (0:00:04.090) 0:00:37.452 ******
2026-01-10 15:15:49.415417 | orchestrator | changed: [localhost]
2026-01-10 15:15:49.415423 | orchestrator |
2026-01-10 15:15:49.415429 | orchestrator | TASK [Add rule to ssh security group] ******************************************
2026-01-10 15:15:49.415451 | orchestrator | Saturday 10 January 2026 15:09:32 +0000 (0:00:05.009) 0:00:42.461 ******
2026-01-10 15:15:49.415457 | orchestrator | changed: [localhost]
2026-01-10 15:15:49.415463 | orchestrator |
2026-01-10 15:15:49.415469 | orchestrator | TASK [Create icmp security group] **********************************************
2026-01-10 15:15:49.415475 | orchestrator | Saturday 10 January 2026 15:09:36 +0000 (0:00:04.187) 0:00:46.649 ******
2026-01-10 15:15:49.415481 | orchestrator | changed: [localhost]
2026-01-10 15:15:49.415488 | orchestrator |
2026-01-10 15:15:49.415494 | orchestrator | TASK [Add rule to icmp security group] *****************************************
2026-01-10 15:15:49.415500 | orchestrator | Saturday 10 January 2026 15:09:40 +0000 (0:00:03.964) 0:00:50.614 ******
2026-01-10 15:15:49.415506 | orchestrator | changed: [localhost]
2026-01-10 15:15:49.415512 | orchestrator |
2026-01-10 15:15:49.415517 | orchestrator | TASK [Create test keypair] *****************************************************
2026-01-10 15:15:49.415524 | orchestrator | Saturday 10 January 2026 15:09:44 +0000 (0:00:03.916) 0:00:54.531 ******
2026-01-10 15:15:49.415531 | orchestrator | changed: [localhost]
2026-01-10 15:15:49.415538 | orchestrator |
2026-01-10 15:15:49.415544 | orchestrator | TASK [Create test network] *****************************************************
2026-01-10 15:15:49.415551 | orchestrator | Saturday 10 January 2026 15:09:48 +0000 (0:00:03.780) 0:00:58.311 ******
2026-01-10 15:15:49.415557 | orchestrator | changed: [localhost]
2026-01-10 15:15:49.415563 | orchestrator |
2026-01-10 15:15:49.415569 | orchestrator | TASK [Create test subnet] ******************************************************
2026-01-10 15:15:49.415576 | orchestrator | Saturday 10 January 2026 15:09:53 +0000 (0:00:04.875) 0:01:03.187 ******
2026-01-10 15:15:49.415584 | orchestrator | changed: [localhost]
2026-01-10 15:15:49.415591 | orchestrator |
2026-01-10 15:15:49.415596 | orchestrator | TASK [Create test router] ******************************************************
2026-01-10 15:15:49.415603 | orchestrator | Saturday 10 January 2026 15:09:58 +0000 (0:00:05.280) 0:01:08.467 ******
2026-01-10 15:15:49.415609 | orchestrator | changed: [localhost]
2026-01-10 15:15:49.415615 | orchestrator |
2026-01-10 15:15:49.415630 | orchestrator | TASK [Create test instances] ***************************************************
2026-01-10 15:15:49.415637 | orchestrator | Saturday 10 January 2026 15:10:09 +0000 (0:00:11.458) 0:01:19.925 ******
2026-01-10 15:15:49.415643 | orchestrator | changed: [localhost] => (item=test)
2026-01-10 15:15:49.415651 | orchestrator | changed: [localhost] => (item=test-1)
2026-01-10 15:15:49.415658 | orchestrator |
2026-01-10 15:15:49.415669 | orchestrator | STILL ALIVE [task 'Create test instances' is running] **************************
2026-01-10 15:15:49.415677 | orchestrator |
2026-01-10 15:15:49.415683 | orchestrator | STILL ALIVE [task 'Create test instances' is running] **************************
2026-01-10 15:15:49.415689 | orchestrator | changed: [localhost] => (item=test-2)
2026-01-10 15:15:49.415695 | orchestrator |
2026-01-10 15:15:49.415700 | orchestrator | STILL ALIVE [task 'Create test instances' is running] **************************
2026-01-10 15:15:49.415707 | orchestrator | changed: [localhost] => (item=test-3)
2026-01-10 15:15:49.415713 | orchestrator |
2026-01-10 15:15:49.415719 | orchestrator | STILL ALIVE [task 'Create test instances' is running] **************************
2026-01-10 15:15:49.415725 | orchestrator |
2026-01-10 15:15:49.415732 | orchestrator | STILL ALIVE [task 'Create test instances' is running] **************************
2026-01-10 15:15:49.415738 | orchestrator | changed: [localhost] => (item=test-4)
2026-01-10 15:15:49.415743 | orchestrator |
2026-01-10 15:15:49.415748 | orchestrator | TASK [Add metadata to instances] ***********************************************
2026-01-10 15:15:49.415776 | orchestrator | Saturday 10 January 2026 15:14:26 +0000 (0:04:16.977) 0:05:36.903 ******
2026-01-10 15:15:49.415784 | orchestrator | changed: [localhost] => (item=test)
2026-01-10 15:15:49.415790 | orchestrator | changed: [localhost] => (item=test-1)
2026-01-10 15:15:49.415796 | orchestrator | changed: [localhost] => (item=test-2)
2026-01-10 15:15:49.415802 | orchestrator | changed: [localhost] => (item=test-3)
2026-01-10 15:15:49.415808 | orchestrator | changed: [localhost] => (item=test-4)
2026-01-10 15:15:49.415813 | orchestrator |
2026-01-10 15:15:49.415820 | orchestrator | TASK [Add tag to instances] ****************************************************
2026-01-10 15:15:49.415827 | orchestrator | Saturday 10 January 2026 15:14:50 +0000 (0:00:23.244) 0:06:00.148 ******
2026-01-10 15:15:49.415833 | orchestrator | changed: [localhost] => (item=test)
2026-01-10 15:15:49.415839 | orchestrator | changed: [localhost] => (item=test-1)
2026-01-10 15:15:49.415846 | orchestrator | changed: [localhost] => (item=test-2)
2026-01-10 15:15:49.415851 | orchestrator | changed: [localhost] => (item=test-3)
2026-01-10 15:15:49.415857 | orchestrator | changed: [localhost] => (item=test-4)
2026-01-10 15:15:49.415863 | orchestrator |
2026-01-10 15:15:49.415869 | orchestrator | TASK [Create test volume] ******************************************************
2026-01-10 15:15:49.415876 | orchestrator | Saturday 10 January 2026 15:15:24 +0000 (0:00:33.906) 0:06:34.055 ******
2026-01-10 15:15:49.415882 | orchestrator | changed: [localhost]
2026-01-10 15:15:49.415888 | orchestrator |
2026-01-10 15:15:49.415895 | orchestrator | TASK [Attach test volume] ******************************************************
2026-01-10 15:15:49.415901 | orchestrator | Saturday 10 January 2026 15:15:30 +0000 (0:00:06.537) 0:06:40.592 ******
2026-01-10 15:15:49.415906 | orchestrator | changed: [localhost]
2026-01-10 15:15:49.415912 | orchestrator |
2026-01-10 15:15:49.415918 | orchestrator | TASK [Create floating ip address] **********************************************
2026-01-10 15:15:49.415924 | orchestrator | Saturday 10 January 2026 15:15:44 +0000 (0:00:13.391) 0:06:53.984 ******
2026-01-10 15:15:49.415930 | orchestrator | ok: [localhost]
2026-01-10 15:15:49.415937 | orchestrator |
2026-01-10 15:15:49.415945 | orchestrator | TASK [Print floating ip address] ***********************************************
2026-01-10 15:15:49.415949 | orchestrator | Saturday 10 January 2026 15:15:49 +0000 (0:00:05.093) 0:06:59.077 ******
2026-01-10 15:15:49.415953 | orchestrator | ok: [localhost] => {
2026-01-10 15:15:49.415957 | orchestrator |  "msg": "192.168.112.112"
2026-01-10 15:15:49.415961 | orchestrator | }
2026-01-10 15:15:49.415965 | orchestrator |
2026-01-10 15:15:49.415969 | orchestrator | PLAY RECAP *********************************************************************
2026-01-10 15:15:49.415973 | orchestrator | localhost : ok=22  changed=20  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 15:15:49.415984 | orchestrator |
2026-01-10 15:15:49.415988 | orchestrator |
2026-01-10 15:15:49.415992 | orchestrator | TASKS RECAP ********************************************************************
2026-01-10 15:15:49.416002 | orchestrator | Saturday 10 January 2026 15:15:49 +0000 (0:00:00.037) 0:06:59.115 ******
2026-01-10 15:15:49.416006 | orchestrator | ===============================================================================
2026-01-10 15:15:49.416010 | orchestrator | Create test instances ------------------------------------------------- 256.98s
2026-01-10 15:15:49.416014 | orchestrator | Add tag to instances --------------------------------------------------- 33.91s
2026-01-10 15:15:49.416017 | orchestrator | Add metadata to instances ---------------------------------------------- 23.24s
2026-01-10 15:15:49.416021 | orchestrator | Attach test volume ----------------------------------------------------- 13.39s
2026-01-10 15:15:49.416025 | orchestrator | Create test router ----------------------------------------------------- 11.46s
2026-01-10 15:15:49.416028 | orchestrator | Add member roles to user test ------------------------------------------ 11.09s
2026-01-10 15:15:49.416032 | orchestrator | Create test volume ------------------------------------------------------ 6.54s
2026-01-10 15:15:49.416036 | orchestrator | Add manager role to user test-admin ------------------------------------- 6.34s
2026-01-10 15:15:49.416039 | orchestrator | Create test subnet ------------------------------------------------------ 5.28s
2026-01-10 15:15:49.416043 | orchestrator | Create floating ip address ---------------------------------------------- 5.09s
2026-01-10 15:15:49.416047 | orchestrator | Create ssh security group ----------------------------------------------- 5.01s
2026-01-10 15:15:49.416050 | orchestrator | Create test network ----------------------------------------------------- 4.88s
2026-01-10 15:15:49.416054 | orchestrator | Add rule to ssh security group ------------------------------------------ 4.19s
2026-01-10 15:15:49.416058 | orchestrator | Create test-admin user -------------------------------------------------- 4.10s
2026-01-10 15:15:49.416062 | orchestrator | Create test server group ------------------------------------------------ 4.09s
2026-01-10 15:15:49.416065 | orchestrator | Create test user -------------------------------------------------------- 4.07s
2026-01-10 15:15:49.416069 | orchestrator | Create test project ----------------------------------------------------- 4.05s
2026-01-10 15:15:49.416073 | orchestrator | Create icmp security group ---------------------------------------------- 3.96s
2026-01-10 15:15:49.416076 | orchestrator | Add rule to icmp security group ----------------------------------------- 3.92s
2026-01-10 15:15:49.416080 | orchestrator | Create test keypair ----------------------------------------------------- 3.78s
2026-01-10 15:15:49.734824 | orchestrator | + server_list
2026-01-10 15:15:49.734914 | orchestrator | + openstack --os-cloud test server list
2026-01-10 15:15:53.423250 | orchestrator | +--------------------------------------+--------+--------+---------------------------------------+--------------------------+----------+
2026-01-10 15:15:53.423346 | orchestrator | | ID | Name | Status | Networks | Image | Flavor |
2026-01-10 15:15:53.423354 | orchestrator | +--------------------------------------+--------+--------+---------------------------------------+--------------------------+----------+
2026-01-10 15:15:53.423359 | orchestrator | | 97961733-5d8f-42bb-95b8-644b1acc8730 | test-4 | ACTIVE | test=192.168.112.152, 192.168.200.63 | N/A (booted from volume) | SCS-1L-1 |
2026-01-10 15:15:53.423363 | orchestrator | | e4b75765-7650-495e-8220-080cfcd2610c | test-3 | ACTIVE | test=192.168.112.193, 192.168.200.180 | N/A (booted from volume) | SCS-1L-1 |
2026-01-10 15:15:53.423367 | orchestrator | | efc00d2b-21cb-4b3e-ab5a-f5934314596c | test-2 | ACTIVE | test=192.168.112.103, 192.168.200.225 | N/A (booted from volume) | SCS-1L-1 |
2026-01-10 15:15:53.423371 | orchestrator | | 772b9824-2b94-49ac-b8b8-a2b3e13bdb1d | test-1 | ACTIVE | test=192.168.112.137, 192.168.200.99 | N/A (booted from volume) | SCS-1L-1 |
2026-01-10 15:15:53.423375 | orchestrator | | 6745b3ca-d678-4499-8c93-e6c81df4b428 | test | ACTIVE | test=192.168.112.112, 192.168.200.64 | N/A (booted from volume) | SCS-1L-1 |
2026-01-10 15:15:53.423399 | orchestrator | +--------------------------------------+--------+--------+---------------------------------------+--------------------------+----------+
2026-01-10 15:15:53.711770 | orchestrator | + openstack --os-cloud test server show test
2026-01-10 15:15:56.806223 | orchestrator | +-------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-01-10 15:15:56.806304 | orchestrator | | Field | Value |
2026-01-10 15:15:56.806313 | orchestrator | +-------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-01-10 15:15:56.806318 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2026-01-10 15:15:56.806322 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2026-01-10 15:15:56.806326 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2026-01-10 15:15:56.806330 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test |
2026-01-10 15:15:56.806334 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2026-01-10 15:15:56.806338 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2026-01-10 15:15:56.806372 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2026-01-10 15:15:56.806383 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2026-01-10 15:15:56.806387 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2026-01-10 15:15:56.806393 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2026-01-10 15:15:56.806397 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2026-01-10 15:15:56.806401 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2026-01-10 15:15:56.806405 | orchestrator | | OS-EXT-STS:power_state | Running |
2026-01-10 15:15:56.806409 | orchestrator | | OS-EXT-STS:task_state | None |
2026-01-10 15:15:56.806413 | orchestrator | | OS-EXT-STS:vm_state | active |
2026-01-10 15:15:56.806420 | orchestrator | | OS-SRV-USG:launched_at | 2026-01-10T15:10:56.000000 |
2026-01-10 15:15:56.806427 | orchestrator | | OS-SRV-USG:terminated_at | None |
2026-01-10 15:15:56.806431 | orchestrator | | accessIPv4 | |
2026-01-10 15:15:56.806435 | orchestrator | | accessIPv6 | |
2026-01-10 15:15:56.806439 | orchestrator | | addresses | test=192.168.112.112, 192.168.200.64 |
2026-01-10 15:15:56.806443 | orchestrator | | config_drive | |
2026-01-10 15:15:56.806447 | orchestrator | | created | 2026-01-10T15:10:18Z |
2026-01-10 15:15:56.806451 | orchestrator | | description | None |
2026-01-10 15:15:56.806458 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2026-01-10 15:15:56.806465 | orchestrator | | hostId | 97742aa53e940013f2b04ad38e7e84cc116956d671c75a7078c17588 |
2026-01-10 15:15:56.806469 | orchestrator | | host_status | None |
2026-01-10 15:15:56.806477 | orchestrator | | id | 6745b3ca-d678-4499-8c93-e6c81df4b428 |
2026-01-10 15:15:56.806481 | orchestrator | | image | N/A (booted from volume) |
2026-01-10 15:15:56.806485 | orchestrator | | key_name | test |
2026-01-10 15:15:56.806491 | orchestrator | | locked | False |
2026-01-10 15:15:56.806495 | orchestrator | | locked_reason | None |
2026-01-10 15:15:56.806499 | orchestrator | | name | test |
2026-01-10 15:15:56.806503 | orchestrator | | pinned_availability_zone | None |
2026-01-10 15:15:56.806506 | orchestrator | | progress | 0 |
2026-01-10 15:15:56.806513 | orchestrator | | project_id | 20856663526947a483b6c37b707ba280 |
2026-01-10 15:15:56.806516 | orchestrator | | properties | hostname='test' |
2026-01-10 15:15:56.806524 | orchestrator | | security_groups | name='icmp' |
2026-01-10 15:15:56.806528 | orchestrator | | | name='ssh' |
2026-01-10 15:15:56.806532 | orchestrator | | server_groups | None |
2026-01-10 15:15:56.806538 | orchestrator | | status | ACTIVE |
2026-01-10 15:15:56.806541 | orchestrator | | tags | test |
2026-01-10 15:15:56.806545 | orchestrator | | trusted_image_certificates | None |
2026-01-10 15:15:56.806549 | orchestrator | | updated | 2026-01-10T15:14:31Z |
2026-01-10 15:15:56.806558 | orchestrator | | user_id | 7d83a658b7c94dbd94f80e50d81922ef |
2026-01-10 15:15:56.806562 | orchestrator | | volumes_attached | delete_on_termination='True', id='38c98d06-6244-4e6e-be39-7f0235b7e864' |
2026-01-10 15:15:56.806566 | orchestrator | | | delete_on_termination='False', id='b44f84bb-ad21-4faf-83c0-f30e936e496b' |
2026-01-10 15:15:56.809390 | orchestrator | +-------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-01-10 15:15:57.101414 | orchestrator | + openstack --os-cloud test server show test-1
2026-01-10 15:16:00.324569 | orchestrator | +-------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-01-10 15:16:00.324656 | orchestrator | | Field | Value |
2026-01-10 15:16:00.324665 | orchestrator | +-------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-01-10 15:16:00.324672 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2026-01-10 15:16:00.324679 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2026-01-10 15:16:00.324700 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2026-01-10 15:16:00.324706 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-1 |
2026-01-10 15:16:00.324712 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2026-01-10 15:16:00.324718 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2026-01-10 15:16:00.324737 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2026-01-10 15:16:00.324743 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2026-01-10 15:16:00.324753 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2026-01-10 15:16:00.324759 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2026-01-10 15:16:00.324765 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2026-01-10 15:16:00.324775 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2026-01-10 15:16:00.324781 | orchestrator | | OS-EXT-STS:power_state | Running |
2026-01-10 15:16:00.324787 | orchestrator | | OS-EXT-STS:task_state | None |
2026-01-10 15:16:00.324793 | orchestrator | | OS-EXT-STS:vm_state | active |
2026-01-10 15:16:00.324799 | orchestrator | | OS-SRV-USG:launched_at | 2026-01-10T15:11:51.000000 |
2026-01-10 15:16:00.324810 | orchestrator | | OS-SRV-USG:terminated_at | None |
2026-01-10 15:16:00.324816 | orchestrator | | accessIPv4 | |
2026-01-10 15:16:00.324822 | orchestrator | | accessIPv6 | |
2026-01-10 15:16:00.324827 | orchestrator | | addresses | test=192.168.112.137, 192.168.200.99 |
2026-01-10 15:16:00.324833 | orchestrator | | config_drive | |
2026-01-10 15:16:00.324847 | orchestrator | | created | 2026-01-10T15:11:16Z |
2026-01-10 15:16:00.324854 | orchestrator | | description | None |
2026-01-10 15:16:00.324860 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2026-01-10 15:16:00.324867 | orchestrator | | hostId | da49c6f976d2754f6c4452f4ff18574b7d8fe1f5c259f1877d4b7b5d |
2026-01-10 15:16:00.324873 | orchestrator | | host_status | None |
2026-01-10 15:16:00.324883 | orchestrator | | id | 772b9824-2b94-49ac-b8b8-a2b3e13bdb1d |
2026-01-10 15:16:00.324892 | orchestrator | | image | N/A (booted from volume) |
2026-01-10 15:16:00.324901 | orchestrator | | key_name | test |
2026-01-10 15:16:00.324910 | orchestrator | | locked | False |
2026-01-10 15:16:00.324923 | orchestrator | | locked_reason | None |
2026-01-10 15:16:00.324929 | orchestrator | | name | test-1 |
2026-01-10 15:16:00.324935 | orchestrator | | pinned_availability_zone | None |
2026-01-10 15:16:00.324941 | orchestrator | | progress | 0 |
2026-01-10 15:16:00.324947 | orchestrator | | project_id | 20856663526947a483b6c37b707ba280 |
2026-01-10 15:16:00.324953 | orchestrator | | properties | hostname='test-1' |
2026-01-10 15:16:00.324964 | orchestrator | | security_groups | name='icmp' |
2026-01-10 15:16:00.324969 | orchestrator | | | name='ssh' |
2026-01-10 15:16:00.324978 | orchestrator | | server_groups | None |
2026-01-10 15:16:00.324988 | orchestrator | | status | ACTIVE |
2026-01-10 15:16:00.324995 | orchestrator | | tags | test |
2026-01-10 15:16:00.325000 | orchestrator | | trusted_image_certificates | None |
2026-01-10 15:16:00.325006 | orchestrator | | updated | 2026-01-10T15:14:36Z |
2026-01-10 15:16:00.325013 | orchestrator | | user_id | 7d83a658b7c94dbd94f80e50d81922ef |
2026-01-10 15:16:00.325018 | orchestrator | | volumes_attached | delete_on_termination='True', id='1887c576-9abb-410f-b179-6f05daf1818f' |
2026-01-10 15:16:00.327950 | orchestrator | +-------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-01-10 15:16:00.600246 | orchestrator | + openstack --os-cloud test server show test-2
2026-01-10 15:16:03.580442 | orchestrator | +-------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-01-10 15:16:03.580505 | orchestrator | | Field | Value |
2026-01-10 15:16:03.580539 | orchestrator | +-------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-01-10 15:16:03.580549 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2026-01-10 15:16:03.580553 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2026-01-10 15:16:03.580557 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2026-01-10 15:16:03.580561 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-2 |
2026-01-10 15:16:03.580565 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2026-01-10 15:16:03.580569 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2026-01-10 15:16:03.580581 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2026-01-10 15:16:03.580585 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2026-01-10 15:16:03.580593 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2026-01-10 15:16:03.580598 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2026-01-10 15:16:03.580603 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2026-01-10 15:16:03.580607 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2026-01-10 15:16:03.580611 | orchestrator | | OS-EXT-STS:power_state | Running |
2026-01-10 15:16:03.580615 | orchestrator | | OS-EXT-STS:task_state | None |
2026-01-10 15:16:03.580618 | orchestrator | | OS-EXT-STS:vm_state | active |
2026-01-10 15:16:03.580622 | orchestrator | | OS-SRV-USG:launched_at | 2026-01-10T15:12:47.000000 |
2026-01-10 15:16:03.580629 | orchestrator | | OS-SRV-USG:terminated_at | None |
2026-01-10 15:16:03.580634 | orchestrator | | accessIPv4 | |
2026-01-10 15:16:03.580640 | orchestrator | | accessIPv6 | |
2026-01-10 15:16:03.580645 | orchestrator | | addresses | test=192.168.112.103, 192.168.200.225 |
2026-01-10 15:16:03.580649 | orchestrator | | config_drive | |
2026-01-10 15:16:03.580653 | orchestrator | | created | 2026-01-10T15:12:11Z |
2026-01-10 15:16:03.580657 | orchestrator | | description | None |
2026-01-10 15:16:03.580661 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2026-01-10 15:16:03.580665 | orchestrator | | hostId | e7132fd45fef2cfd111160cec1b6e258a8ba37dd7f8eb06fa6197ee6 |
2026-01-10 15:16:03.580669 | orchestrator | | host_status | None |
2026-01-10 15:16:03.580676 | orchestrator | | id | efc00d2b-21cb-4b3e-ab5a-f5934314596c |
2026-01-10 15:16:03.580683 | orchestrator | | image | N/A (booted from volume) |
2026-01-10 15:16:03.580689 | orchestrator | | key_name | test |
2026-01-10 15:16:03.580693 | orchestrator | | locked | False |
2026-01-10 15:16:03.580697 | orchestrator | | locked_reason | None |
2026-01-10 15:16:03.580701 | orchestrator | | name | test-2 |
2026-01-10 15:16:03.580704 | orchestrator | | pinned_availability_zone | None |
2026-01-10 15:16:03.580708 | orchestrator | | progress | 0 |
2026-01-10 15:16:03.580712 | orchestrator | | project_id | 20856663526947a483b6c37b707ba280 |
2026-01-10 15:16:03.580716 | orchestrator | | properties | hostname='test-2' |
2026-01-10 15:16:03.580726 | orchestrator | | security_groups | name='icmp' |
2026-01-10 15:16:03.580730 | orchestrator | | | name='ssh' |
2026-01-10 15:16:03.580734 | orchestrator | | server_groups | None |
2026-01-10 15:16:03.580741 | orchestrator | | status | ACTIVE |
2026-01-10 15:16:03.580745 | orchestrator | | tags | test |
2026-01-10 15:16:03.580749 | orchestrator | | trusted_image_certificates | None |
2026-01-10 15:16:03.580753 | orchestrator | | updated | 2026-01-10T15:14:41Z |
2026-01-10 15:16:03.580757 | orchestrator | | user_id | 7d83a658b7c94dbd94f80e50d81922ef |
2026-01-10 15:16:03.580761 | orchestrator | | volumes_attached | delete_on_termination='True', id='9fe27f3d-c43e-4ee7-af3a-c3f4c710c2c1' |
2026-01-10 15:16:03.586078 | orchestrator | +-------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-01-10 15:16:03.898744 | orchestrator | + openstack --os-cloud test server show test-3
2026-01-10 15:16:06.961691 | orchestrator | +-------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-01-10 15:16:06.961805 | orchestrator | | Field | Value |
2026-01-10 15:16:06.961850 | orchestrator | +-------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-01-10 15:16:06.961866 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2026-01-10 15:16:06.961874 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2026-01-10 15:16:06.961881 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2026-01-10 15:16:06.961888 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-3 |
2026-01-10 15:16:06.961895 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2026-01-10 15:16:06.961928 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2026-01-10 15:16:06.961959 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2026-01-10 15:16:06.961967 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2026-01-10 15:16:06.961974 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2026-01-10 15:16:06.961984 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2026-01-10 15:16:06.961992 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2026-01-10 15:16:06.961998 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2026-01-10 15:16:06.962005 | orchestrator | | 
OS-EXT-STS:power_state | Running | 2026-01-10 15:16:06.962054 | orchestrator | | OS-EXT-STS:task_state | None | 2026-01-10 15:16:06.962062 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-01-10 15:16:06.962077 | orchestrator | | OS-SRV-USG:launched_at | 2026-01-10T15:13:31.000000 | 2026-01-10 15:16:06.962089 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-01-10 15:16:06.962097 | orchestrator | | accessIPv4 | | 2026-01-10 15:16:06.962121 | orchestrator | | accessIPv6 | | 2026-01-10 15:16:06.962132 | orchestrator | | addresses | test=192.168.112.193, 192.168.200.180 | 2026-01-10 15:16:06.962139 | orchestrator | | config_drive | | 2026-01-10 15:16:06.962146 | orchestrator | | created | 2026-01-10T15:13:06Z | 2026-01-10 15:16:06.962153 | orchestrator | | description | None | 2026-01-10 15:16:06.962160 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-01-10 15:16:06.962172 | orchestrator | | hostId | da49c6f976d2754f6c4452f4ff18574b7d8fe1f5c259f1877d4b7b5d | 2026-01-10 15:16:06.962179 | orchestrator | | host_status | None | 2026-01-10 15:16:06.962191 | orchestrator | | id | e4b75765-7650-495e-8220-080cfcd2610c | 2026-01-10 15:16:06.962205 | orchestrator | | image | N/A (booted from volume) | 2026-01-10 15:16:06.962212 | orchestrator | | key_name | test | 2026-01-10 15:16:06.962224 | orchestrator | | locked | False | 2026-01-10 15:16:06.962231 | orchestrator | | locked_reason | None | 2026-01-10 15:16:06.962238 | orchestrator | | name | test-3 | 2026-01-10 15:16:06.962245 | orchestrator | | pinned_availability_zone | None | 2026-01-10 15:16:06.962257 | orchestrator | | progress | 0 | 2026-01-10 
15:16:06.962264 | orchestrator | | project_id | 20856663526947a483b6c37b707ba280 | 2026-01-10 15:16:06.962270 | orchestrator | | properties | hostname='test-3' | 2026-01-10 15:16:06.962282 | orchestrator | | security_groups | name='icmp' | 2026-01-10 15:16:06.962290 | orchestrator | | | name='ssh' | 2026-01-10 15:16:06.962296 | orchestrator | | server_groups | None | 2026-01-10 15:16:06.962306 | orchestrator | | status | ACTIVE | 2026-01-10 15:16:06.962313 | orchestrator | | tags | test | 2026-01-10 15:16:06.962320 | orchestrator | | trusted_image_certificates | None | 2026-01-10 15:16:06.962331 | orchestrator | | updated | 2026-01-10T15:14:45Z | 2026-01-10 15:16:06.962338 | orchestrator | | user_id | 7d83a658b7c94dbd94f80e50d81922ef | 2026-01-10 15:16:06.962345 | orchestrator | | volumes_attached | delete_on_termination='True', id='9e68ec4d-35b2-4a86-add7-428cab5aaf18' | 2026-01-10 15:16:06.973455 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-01-10 15:16:07.257416 | orchestrator | + openstack --os-cloud test server show test-4 2026-01-10 15:16:10.163393 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-01-10 15:16:10.163495 | orchestrator | | Field | Value | 2026-01-10 15:16:10.163521 | orchestrator | 
+-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-01-10 15:16:10.163528 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-01-10 15:16:10.163534 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-01-10 15:16:10.163562 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-01-10 15:16:10.163570 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-4 | 2026-01-10 15:16:10.163577 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-01-10 15:16:10.163584 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-01-10 15:16:10.163605 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-01-10 15:16:10.163613 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-01-10 15:16:10.163620 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-01-10 15:16:10.163627 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-01-10 15:16:10.163922 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-01-10 15:16:10.163941 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-01-10 15:16:10.163958 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-01-10 15:16:10.163967 | orchestrator | | OS-EXT-STS:task_state | None | 2026-01-10 15:16:10.163973 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-01-10 15:16:10.163979 | orchestrator | | OS-SRV-USG:launched_at | 2026-01-10T15:14:13.000000 | 2026-01-10 15:16:10.163994 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-01-10 15:16:10.164006 | orchestrator | | accessIPv4 | | 2026-01-10 15:16:10.164013 | orchestrator | | accessIPv6 | | 2026-01-10 15:16:10.164020 | orchestrator | | 
addresses | test=192.168.112.152, 192.168.200.63 | 2026-01-10 15:16:10.164026 | orchestrator | | config_drive | | 2026-01-10 15:16:10.164038 | orchestrator | | created | 2026-01-10T15:13:48Z | 2026-01-10 15:16:10.164044 | orchestrator | | description | None | 2026-01-10 15:16:10.164050 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-01-10 15:16:10.164055 | orchestrator | | hostId | e7132fd45fef2cfd111160cec1b6e258a8ba37dd7f8eb06fa6197ee6 | 2026-01-10 15:16:10.164061 | orchestrator | | host_status | None | 2026-01-10 15:16:10.164077 | orchestrator | | id | 97961733-5d8f-42bb-95b8-644b1acc8730 | 2026-01-10 15:16:10.164084 | orchestrator | | image | N/A (booted from volume) | 2026-01-10 15:16:10.164091 | orchestrator | | key_name | test | 2026-01-10 15:16:10.164097 | orchestrator | | locked | False | 2026-01-10 15:16:10.164128 | orchestrator | | locked_reason | None | 2026-01-10 15:16:10.164135 | orchestrator | | name | test-4 | 2026-01-10 15:16:10.164141 | orchestrator | | pinned_availability_zone | None | 2026-01-10 15:16:10.164147 | orchestrator | | progress | 0 | 2026-01-10 15:16:10.164151 | orchestrator | | project_id | 20856663526947a483b6c37b707ba280 | 2026-01-10 15:16:10.164154 | orchestrator | | properties | hostname='test-4' | 2026-01-10 15:16:10.164166 | orchestrator | | security_groups | name='icmp' | 2026-01-10 15:16:10.164170 | orchestrator | | | name='ssh' | 2026-01-10 15:16:10.164174 | orchestrator | | server_groups | None | 2026-01-10 15:16:10.164181 | orchestrator | | status | ACTIVE | 2026-01-10 15:16:10.164184 | orchestrator | | tags | test | 2026-01-10 15:16:10.164188 | orchestrator | | 
trusted_image_certificates | None | 2026-01-10 15:16:10.164192 | orchestrator | | updated | 2026-01-10T15:14:49Z | 2026-01-10 15:16:10.164196 | orchestrator | | user_id | 7d83a658b7c94dbd94f80e50d81922ef | 2026-01-10 15:16:10.164200 | orchestrator | | volumes_attached | delete_on_termination='True', id='e43cc69e-870c-44f3-ac5c-08e06183d687' | 2026-01-10 15:16:10.168401 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-01-10 15:16:10.456664 | orchestrator | + server_ping 2026-01-10 15:16:10.458088 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2026-01-10 15:16:10.458668 | orchestrator | ++ tr -d '\r' 2026-01-10 15:16:13.308856 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-01-10 15:16:13.309244 | orchestrator | + ping -c3 192.168.112.112 2026-01-10 15:16:13.324646 | orchestrator | PING 192.168.112.112 (192.168.112.112) 56(84) bytes of data. 
2026-01-10 15:16:13.324733 | orchestrator | 64 bytes from 192.168.112.112: icmp_seq=1 ttl=63 time=5.34 ms
2026-01-10 15:16:14.322948 | orchestrator | 64 bytes from 192.168.112.112: icmp_seq=2 ttl=63 time=2.41 ms
2026-01-10 15:16:15.324341 | orchestrator | 64 bytes from 192.168.112.112: icmp_seq=3 ttl=63 time=2.23 ms
2026-01-10 15:16:15.324454 | orchestrator |
2026-01-10 15:16:15.324465 | orchestrator | --- 192.168.112.112 ping statistics ---
2026-01-10 15:16:15.324473 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2026-01-10 15:16:15.324480 | orchestrator | rtt min/avg/max/mdev = 2.225/3.322/5.337/1.426 ms
2026-01-10 15:16:15.324974 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-01-10 15:16:15.325039 | orchestrator | + ping -c3 192.168.112.193
2026-01-10 15:16:15.340314 | orchestrator | PING 192.168.112.193 (192.168.112.193) 56(84) bytes of data.
2026-01-10 15:16:15.340404 | orchestrator | 64 bytes from 192.168.112.193: icmp_seq=1 ttl=63 time=9.92 ms
2026-01-10 15:16:16.333617 | orchestrator | 64 bytes from 192.168.112.193: icmp_seq=2 ttl=63 time=2.01 ms
2026-01-10 15:16:17.335424 | orchestrator | 64 bytes from 192.168.112.193: icmp_seq=3 ttl=63 time=1.76 ms
2026-01-10 15:16:17.335505 | orchestrator |
2026-01-10 15:16:17.335514 | orchestrator | --- 192.168.112.193 ping statistics ---
2026-01-10 15:16:17.335522 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-01-10 15:16:17.335529 | orchestrator | rtt min/avg/max/mdev = 1.755/4.561/9.917/3.788 ms
2026-01-10 15:16:17.335537 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-01-10 15:16:17.335544 | orchestrator | + ping -c3 192.168.112.152
2026-01-10 15:16:17.349158 | orchestrator | PING 192.168.112.152 (192.168.112.152) 56(84) bytes of data.
2026-01-10 15:16:17.349245 | orchestrator | 64 bytes from 192.168.112.152: icmp_seq=1 ttl=63 time=7.84 ms
2026-01-10 15:16:18.345328 | orchestrator | 64 bytes from 192.168.112.152: icmp_seq=2 ttl=63 time=2.75 ms
2026-01-10 15:16:19.346748 | orchestrator | 64 bytes from 192.168.112.152: icmp_seq=3 ttl=63 time=2.07 ms
2026-01-10 15:16:19.346826 | orchestrator |
2026-01-10 15:16:19.346833 | orchestrator | --- 192.168.112.152 ping statistics ---
2026-01-10 15:16:19.346840 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-01-10 15:16:19.346845 | orchestrator | rtt min/avg/max/mdev = 2.065/4.216/7.839/2.576 ms
2026-01-10 15:16:19.346851 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-01-10 15:16:19.346856 | orchestrator | + ping -c3 192.168.112.103
2026-01-10 15:16:19.359609 | orchestrator | PING 192.168.112.103 (192.168.112.103) 56(84) bytes of data.
2026-01-10 15:16:19.359700 | orchestrator | 64 bytes from 192.168.112.103: icmp_seq=1 ttl=63 time=7.19 ms
2026-01-10 15:16:20.356732 | orchestrator | 64 bytes from 192.168.112.103: icmp_seq=2 ttl=63 time=2.56 ms
2026-01-10 15:16:21.358064 | orchestrator | 64 bytes from 192.168.112.103: icmp_seq=3 ttl=63 time=1.93 ms
2026-01-10 15:16:21.358175 | orchestrator |
2026-01-10 15:16:21.358183 | orchestrator | --- 192.168.112.103 ping statistics ---
2026-01-10 15:16:21.358190 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-01-10 15:16:21.358195 | orchestrator | rtt min/avg/max/mdev = 1.927/3.894/7.193/2.346 ms
2026-01-10 15:16:21.358672 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-01-10 15:16:21.358718 | orchestrator | + ping -c3 192.168.112.137
2026-01-10 15:16:21.369085 | orchestrator | PING 192.168.112.137 (192.168.112.137) 56(84) bytes of data.
2026-01-10 15:16:21.369183 | orchestrator | 64 bytes from 192.168.112.137: icmp_seq=1 ttl=63 time=4.55 ms
2026-01-10 15:16:22.368752 | orchestrator | 64 bytes from 192.168.112.137: icmp_seq=2 ttl=63 time=2.33 ms
2026-01-10 15:16:23.370268 | orchestrator | 64 bytes from 192.168.112.137: icmp_seq=3 ttl=63 time=1.47 ms
2026-01-10 15:16:23.370344 | orchestrator |
2026-01-10 15:16:23.370354 | orchestrator | --- 192.168.112.137 ping statistics ---
2026-01-10 15:16:23.370363 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-01-10 15:16:23.370369 | orchestrator | rtt min/avg/max/mdev = 1.468/2.782/4.549/1.297 ms
2026-01-10 15:16:23.370376 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]]
2026-01-10 15:16:23.787239 | orchestrator | ok: Runtime: 0:12:22.728953
2026-01-10 15:16:23.848593 |
2026-01-10 15:16:23.848850 | TASK [Run tempest]
2026-01-10 15:16:24.395509 | orchestrator | skipping: Conditional result was False
2026-01-10 15:16:24.413867 |
2026-01-10 15:16:24.414045 | TASK [Check prometheus alert status]
2026-01-10 15:16:24.955374 | orchestrator | skipping: Conditional result was False
2026-01-10 15:16:24.958901 |
2026-01-10 15:16:24.959128 | PLAY RECAP
2026-01-10 15:16:24.959347 | orchestrator | ok: 24 changed: 11 unreachable: 0 failed: 0 skipped: 5 rescued: 0 ignored: 0
2026-01-10 15:16:24.959435 |
2026-01-10 15:16:25.239554 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main]
2026-01-10 15:16:25.240861 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-01-10 15:16:26.050645 |
2026-01-10 15:16:26.050864 | PLAY [Post output play]
2026-01-10 15:16:26.070328 |
2026-01-10 15:16:26.070506 | LOOP [stage-output : Register sources]
2026-01-10 15:16:26.152720 |
2026-01-10 15:16:26.153122 | TASK [stage-output : Check sudo]
2026-01-10 15:16:27.061695 | orchestrator | sudo: a password is required
2026-01-10 15:16:27.197643 | orchestrator | ok: Runtime: 0:00:00.013685
2026-01-10 15:16:27.213593 |
2026-01-10 15:16:27.213781 | LOOP [stage-output : Set source and destination for files and folders]
2026-01-10 15:16:27.263147 |
2026-01-10 15:16:27.263549 | TASK [stage-output : Build a list of source, dest dictionaries]
2026-01-10 15:16:27.343735 | orchestrator | ok
2026-01-10 15:16:27.353912 |
2026-01-10 15:16:27.354125 | LOOP [stage-output : Ensure target folders exist]
2026-01-10 15:16:27.874009 | orchestrator | ok: "docs"
2026-01-10 15:16:27.874347 |
2026-01-10 15:16:28.160837 | orchestrator | ok: "artifacts"
2026-01-10 15:16:28.457484 | orchestrator | ok: "logs"
2026-01-10 15:16:28.474348 |
2026-01-10 15:16:28.474529 | LOOP [stage-output : Copy files and folders to staging folder]
2026-01-10 15:16:28.511190 |
2026-01-10 15:16:28.511484 | TASK [stage-output : Make all log files readable]
2026-01-10 15:16:28.876880 | orchestrator | ok
2026-01-10 15:16:28.886370 |
2026-01-10 15:16:28.886540 | TASK [stage-output : Rename log files that match extensions_to_txt]
2026-01-10 15:16:28.931892 | orchestrator | skipping: Conditional result was False
2026-01-10 15:16:28.942166 |
2026-01-10 15:16:28.942332 | TASK [stage-output : Discover log files for compression]
2026-01-10 15:16:28.977557 | orchestrator | skipping: Conditional result was False
2026-01-10 15:16:28.991974 |
2026-01-10 15:16:28.992156 | LOOP [stage-output : Archive everything from logs]
2026-01-10 15:16:29.049682 |
2026-01-10 15:16:29.049879 | PLAY [Post cleanup play]
2026-01-10 15:16:29.059046 |
2026-01-10 15:16:29.059172 | TASK [Set cloud fact (Zuul deployment)]
2026-01-10 15:16:29.126661 | orchestrator | ok
2026-01-10 15:16:29.139093 |
2026-01-10 15:16:29.139229 | TASK [Set cloud fact (local deployment)]
2026-01-10 15:16:29.174079 | orchestrator | skipping: Conditional result was False
2026-01-10 15:16:29.189939 |
2026-01-10 15:16:29.190111 | TASK [Clean the cloud environment]
2026-01-10 15:16:29.909348 | orchestrator | 2026-01-10 15:16:29 - clean up servers
2026-01-10 15:16:30.669708 | orchestrator | 2026-01-10 15:16:30 - testbed-manager
2026-01-10 15:16:30.753728 | orchestrator | 2026-01-10 15:16:30 - testbed-node-3
2026-01-10 15:16:30.862523 | orchestrator | 2026-01-10 15:16:30 - testbed-node-1
2026-01-10 15:16:30.949004 | orchestrator | 2026-01-10 15:16:30 - testbed-node-5
2026-01-10 15:16:31.045760 | orchestrator | 2026-01-10 15:16:31 - testbed-node-0
2026-01-10 15:16:31.136948 | orchestrator | 2026-01-10 15:16:31 - testbed-node-4
2026-01-10 15:16:31.222428 | orchestrator | 2026-01-10 15:16:31 - testbed-node-2
2026-01-10 15:16:31.304717 | orchestrator | 2026-01-10 15:16:31 - clean up keypairs
2026-01-10 15:16:31.320328 | orchestrator | 2026-01-10 15:16:31 - testbed
2026-01-10 15:16:31.340244 | orchestrator | 2026-01-10 15:16:31 - wait for servers to be gone
2026-01-10 15:16:42.217052 | orchestrator | 2026-01-10 15:16:42 - clean up ports
2026-01-10 15:16:42.388407 | orchestrator | 2026-01-10 15:16:42 - 5f809922-599e-48be-9dba-dfc332b315c5
2026-01-10 15:16:42.624713 | orchestrator | 2026-01-10 15:16:42 - 823370be-01de-4e3d-843b-1381d7bc86d3
2026-01-10 15:16:42.926150 | orchestrator | 2026-01-10 15:16:42 - aa9dd122-1d06-4920-9886-e30a6381a70b
2026-01-10 15:16:43.366228 | orchestrator | 2026-01-10 15:16:43 - b355c9ad-bc69-46ae-b50c-5173bc3f8795
2026-01-10 15:16:43.591928 | orchestrator | 2026-01-10 15:16:43 - e633335b-8cfe-4e2d-885e-c58e3ca2aa4e
2026-01-10 15:16:43.796492 | orchestrator | 2026-01-10 15:16:43 - f894c187-b1e4-4df3-92de-3337a802535e
2026-01-10 15:16:43.998431 | orchestrator | 2026-01-10 15:16:43 - f9183ee1-95b4-45ce-9f4d-ba8a2a522a34
2026-01-10 15:16:44.214304 | orchestrator | 2026-01-10 15:16:44 - clean up volumes
2026-01-10 15:16:44.329938 | orchestrator | 2026-01-10 15:16:44 - testbed-volume-manager-base
2026-01-10 15:16:44.367565 | orchestrator | 2026-01-10 15:16:44 - testbed-volume-4-node-base
2026-01-10 15:16:44.405165 | orchestrator | 2026-01-10 15:16:44 - testbed-volume-1-node-base
2026-01-10 15:16:44.444480 | orchestrator | 2026-01-10 15:16:44 - testbed-volume-3-node-base
2026-01-10 15:16:44.486577 | orchestrator | 2026-01-10 15:16:44 - testbed-volume-2-node-base
2026-01-10 15:16:44.530150 | orchestrator | 2026-01-10 15:16:44 - testbed-volume-5-node-base
2026-01-10 15:16:44.572787 | orchestrator | 2026-01-10 15:16:44 - testbed-volume-1-node-4
2026-01-10 15:16:44.611475 | orchestrator | 2026-01-10 15:16:44 - testbed-volume-0-node-base
2026-01-10 15:16:44.649967 | orchestrator | 2026-01-10 15:16:44 - testbed-volume-7-node-4
2026-01-10 15:16:44.692158 | orchestrator | 2026-01-10 15:16:44 - testbed-volume-4-node-4
2026-01-10 15:16:44.736228 | orchestrator | 2026-01-10 15:16:44 - testbed-volume-8-node-5
2026-01-10 15:16:44.777150 | orchestrator | 2026-01-10 15:16:44 - testbed-volume-6-node-3
2026-01-10 15:16:44.818004 | orchestrator | 2026-01-10 15:16:44 - testbed-volume-3-node-3
2026-01-10 15:16:44.855667 | orchestrator | 2026-01-10 15:16:44 - testbed-volume-5-node-5
2026-01-10 15:16:44.896237 | orchestrator | 2026-01-10 15:16:44 - testbed-volume-2-node-5
2026-01-10 15:16:44.937695 | orchestrator | 2026-01-10 15:16:44 - testbed-volume-0-node-3
2026-01-10 15:16:44.982137 | orchestrator | 2026-01-10 15:16:44 - disconnect routers
2026-01-10 15:16:45.087352 | orchestrator | 2026-01-10 15:16:45 - testbed
2026-01-10 15:16:46.134000 | orchestrator | 2026-01-10 15:16:46 - clean up subnets
2026-01-10 15:16:46.206359 | orchestrator | 2026-01-10 15:16:46 - subnet-testbed-management
2026-01-10 15:16:46.372666 | orchestrator | 2026-01-10 15:16:46 - clean up networks
2026-01-10 15:16:46.546592 | orchestrator | 2026-01-10 15:16:46 - net-testbed-management
2026-01-10 15:16:46.855784 | orchestrator | 2026-01-10 15:16:46 - clean up security groups
2026-01-10 15:16:46.893728 | orchestrator | 2026-01-10 15:16:46 - testbed-management
2026-01-10 15:16:47.025216 | orchestrator | 2026-01-10 15:16:47 - testbed-node
2026-01-10 15:16:47.142252 | orchestrator | 2026-01-10 15:16:47 - clean up floating ips
2026-01-10 15:16:47.181725 | orchestrator | 2026-01-10 15:16:47 - 81.163.192.106
2026-01-10 15:16:47.548097 | orchestrator | 2026-01-10 15:16:47 - clean up routers
2026-01-10 15:16:47.642880 | orchestrator | 2026-01-10 15:16:47 - testbed
2026-01-10 15:16:48.752361 | orchestrator | ok: Runtime: 0:00:19.015472
2026-01-10 15:16:48.757130 |
2026-01-10 15:16:48.757425 | PLAY RECAP
2026-01-10 15:16:48.757614 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2026-01-10 15:16:48.757716 |
2026-01-10 15:16:48.921905 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-01-10 15:16:48.924423 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-01-10 15:16:49.722507 |
2026-01-10 15:16:49.722685 | PLAY [Cleanup play]
2026-01-10 15:16:49.739730 |
2026-01-10 15:16:49.739890 | TASK [Set cloud fact (Zuul deployment)]
2026-01-10 15:16:49.804605 | orchestrator | ok
2026-01-10 15:16:49.816109 |
2026-01-10 15:16:49.816365 | TASK [Set cloud fact (local deployment)]
2026-01-10 15:16:49.842322 | orchestrator | skipping: Conditional result was False
2026-01-10 15:16:49.851409 |
2026-01-10 15:16:49.851555 | TASK [Clean the cloud environment]
2026-01-10 15:16:51.078960 | orchestrator | 2026-01-10 15:16:51 - clean up servers
2026-01-10 15:16:51.562182 | orchestrator | 2026-01-10 15:16:51 - clean up keypairs
2026-01-10 15:16:51.577355 | orchestrator | 2026-01-10 15:16:51 - wait for servers to be gone
2026-01-10 15:16:51.630322 | orchestrator | 2026-01-10 15:16:51 - clean up ports
2026-01-10 15:16:51.723647 | orchestrator | 2026-01-10 15:16:51 - clean up volumes
2026-01-10 15:16:51.795462 | orchestrator | 2026-01-10 15:16:51 - disconnect routers
2026-01-10 15:16:51.824127 | orchestrator | 2026-01-10 15:16:51 - clean up subnets
2026-01-10 15:16:51.849129 | orchestrator | 2026-01-10 15:16:51 - clean up networks
2026-01-10 15:16:52.420677 | orchestrator | 2026-01-10 15:16:52 - clean up security groups
2026-01-10 15:16:52.455891 | orchestrator | 2026-01-10 15:16:52 - clean up floating ips
2026-01-10 15:16:52.488741 | orchestrator | 2026-01-10 15:16:52 - clean up routers
2026-01-10 15:16:52.891276 | orchestrator | ok: Runtime: 0:00:01.864673
2026-01-10 15:16:52.894760 |
2026-01-10 15:16:52.894964 | PLAY RECAP
2026-01-10 15:16:52.895084 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2026-01-10 15:16:52.895148 |
2026-01-10 15:16:53.032458 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-01-10 15:16:53.035559 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-01-10 15:16:53.829879 |
2026-01-10 15:16:53.830059 | PLAY [Base post-fetch]
2026-01-10 15:16:53.846798 |
2026-01-10 15:16:53.847056 | TASK [fetch-output : Set log path for multiple nodes]
2026-01-10 15:16:53.902914 | orchestrator | skipping: Conditional result was False
2026-01-10 15:16:53.909873 |
2026-01-10 15:16:53.910036 | TASK [fetch-output : Set log path for single node]
2026-01-10 15:16:53.967495 | orchestrator | ok
2026-01-10 15:16:53.979017 |
2026-01-10 15:16:53.979227 | LOOP [fetch-output : Ensure local output dirs]
2026-01-10 15:16:54.569231 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/e5b662a5167846cbb307ba316b919d7d/work/logs"
2026-01-10 15:16:54.864164 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/e5b662a5167846cbb307ba316b919d7d/work/artifacts"
2026-01-10 15:16:55.168843 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/e5b662a5167846cbb307ba316b919d7d/work/docs"
2026-01-10 15:16:55.184508 |
2026-01-10 15:16:55.184645 | LOOP [fetch-output : Collect logs, artifacts and docs]
2026-01-10 15:16:56.147202 | orchestrator | changed: .d..t...... ./
2026-01-10 15:16:56.147624 | orchestrator | changed: All items complete
2026-01-10 15:16:56.147678 |
2026-01-10 15:16:56.885418 | orchestrator | changed: .d..t...... ./
2026-01-10 15:16:57.687590 | orchestrator | changed: .d..t...... ./
2026-01-10 15:16:57.721945 |
2026-01-10 15:16:57.722125 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2026-01-10 15:16:58.350522 | orchestrator -> localhost | ok: Item: artifacts Runtime: 0:00:00.017666
2026-01-10 15:16:58.657935 | orchestrator -> localhost | ok: Item: docs Runtime: 0:00:00.010905
2026-01-10 15:16:58.679152 |
2026-01-10 15:16:58.679327 | PLAY RECAP
2026-01-10 15:16:58.679410 | orchestrator | ok: 4 changed: 3 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2026-01-10 15:16:58.679450 |
2026-01-10 15:16:58.823604 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-01-10 15:16:58.826169 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-01-10 15:16:59.655565 |
2026-01-10 15:16:59.655739 | PLAY [Base post]
2026-01-10 15:16:59.671096 |
2026-01-10 15:16:59.671285 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2026-01-10 15:17:00.717098 | orchestrator | changed
2026-01-10 15:17:00.727292 |
2026-01-10 15:17:00.727446 | PLAY RECAP
2026-01-10 15:17:00.727522 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2026-01-10 15:17:00.727595 |
2026-01-10 15:17:00.861195 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-01-10 15:17:00.862263 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2026-01-10 15:17:01.691197 |
2026-01-10 15:17:01.691425 | PLAY [Base post-logs]
2026-01-10 15:17:01.702880 |
2026-01-10 15:17:01.703042 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2026-01-10 15:17:02.190678 | localhost | changed
2026-01-10 15:17:02.209351 |
2026-01-10 15:17:02.209562 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2026-01-10 15:17:02.249325 | localhost | ok
2026-01-10 15:17:02.257028 |
2026-01-10 15:17:02.257223 | TASK [Set zuul-log-path fact]
2026-01-10 15:17:02.288002 | localhost | ok
2026-01-10 15:17:02.303382 |
2026-01-10 15:17:02.303583 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-01-10 15:17:02.344143 | localhost | ok
2026-01-10 15:17:02.351610 |
2026-01-10 15:17:02.351812 | TASK [upload-logs : Create log directories]
2026-01-10 15:17:02.870185 | localhost | changed
2026-01-10 15:17:02.873302 |
2026-01-10 15:17:02.873417 | TASK [upload-logs : Ensure logs are readable before uploading]
2026-01-10 15:17:03.417955 | localhost -> localhost | ok: Runtime: 0:00:00.008851
2026-01-10 15:17:03.423207 |
2026-01-10 15:17:03.423361 | TASK [upload-logs : Upload logs to log server]
2026-01-10 15:17:04.052538 | localhost | Output suppressed because no_log was given
2026-01-10 15:17:04.056673 |
2026-01-10 15:17:04.056874 | LOOP [upload-logs : Compress console log and json output]
2026-01-10 15:17:04.128424 | localhost | skipping: Conditional result was False
2026-01-10 15:17:04.134744 | localhost | skipping: Conditional result was False
2026-01-10 15:17:04.142434 |
2026-01-10 15:17:04.142683 | LOOP [upload-logs : Upload compressed console log and json output]
2026-01-10 15:17:04.201211 | localhost | skipping: Conditional result was False
2026-01-10 15:17:04.201847 |
2026-01-10 15:17:04.213468 | localhost | skipping: Conditional result was False
2026-01-10 15:17:04.228282 |
2026-01-10 15:17:04.228489 | LOOP [upload-logs : Upload console log and json output]